I need a break. Time to go back to when things were simple (or at least appeared to be) and do something just for fun. Like writing a little game for my old box. A little 8-bit processor, 4K of RAM, a weird little graphics chip and an assembler are all you need. But wait, this is tech from 1976! A graphics chip? Yep, we are racing the electron beam again. But they did it in such a clever way that a kid could get it to work. It involved interrupts and DMA, and all the code that had to stay in sync with the electron beam was contained in an interrupt routine of fewer than 32 instructions.

However, that simplicity still does not come without a price. The graphics chip issues 1024 x 60 DMA requests every second and also calls the interrupt routine 60 times a second. Whatever goes on in that interrupt routine adds up very quickly, taking away a good percentage of the instructions per second the processor could otherwise 'waste' on such luxuries as executing its program. Just how much, exactly? Those interrupt routines come in two flavors, and the two give very different values. After all these years I have now taken the time to actually do the math.

The worst case is an interrupt routine that manipulates the DMA pointer to repeat every raster line two or more times. To do that, you have to stay in the interrupt routine for the entire duration of the frame, leaving only the vertical blank period for program execution. Just as bad as racing the beam always was. At least you got a more useful vertical resolution this way and needed a significantly smaller graphics buffer. Still, this left you with only 33.64% of the CPU time for your program. Ouch.

The better option was not to race the beam at all. The interrupt routine merely reset the DMA pointer to the beginning of your graphics buffer for every frame and did not hang around any longer to repeat any scan lines. That left you with a weird resolution of 64 x 128 pixels and required a graphics buffer of 1024 bytes, but it also left 71.63% of the CPU time for the actual program.

So, which option would you choose? Memory is not as much of an issue as it used to be, and I think I can live with a weird resolution and take the performance gain. By the way, the same old processor, unrestricted by the old graphics chip, gives me more than 12 times the instructions per second compared to that worst case I have been using for 45 years now. And I have not even really tried to overclock it yet.
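The bookkeeping behind those two percentages can be sketched roughly like this. This is a minimal back-of-envelope sketch using assumed timing constants typical of a 1976 NTSC design (14 machine cycles per raster line, 262 lines per 60 Hz frame, 8 DMA-stolen cycles per visible line, 128 visible lines matching the 1024 DMA requests per frame); it deliberately ignores the interrupt routine's own entry, exit and housekeeping cycles, which is why its rough figures come out somewhat higher than the carefully computed 33.64% and 71.63%.

```python
# Back-of-envelope CPU budget for a beam-chasing display of this kind.
# All timing constants below are ASSUMPTIONS for illustration, not
# measured values from the machine described in the text.

CYCLES_PER_LINE = 14      # assumed machine cycles per raster line
TOTAL_LINES = 262         # assumed raster lines per 60 Hz frame
VISIBLE_LINES = 128       # lines with DMA active (1024 bytes / 8 per line)
DMA_CYCLES_PER_LINE = 8   # one stolen cycle per fetched byte

frame_cycles = TOTAL_LINES * CYCLES_PER_LINE  # 3668 cycles per frame

# Option 1: the interrupt routine repeats raster lines, so it is busy
# for the whole display window; the program only runs during blanking.
worst_case = (frame_cycles - VISIBLE_LINES * CYCLES_PER_LINE) / frame_cycles

# Option 2: the interrupt routine just resets the DMA pointer and
# returns; only the DMA itself steals cycles during the display.
best_case = (frame_cycles - VISIBLE_LINES * DMA_CYCLES_PER_LINE) / frame_cycles

print(f"worst case: about {worst_case:.1%} of the CPU left")  # about 51%
print(f"best case:  about {best_case:.1%} of the CPU left")   # about 72%
```

The gap between these rough numbers and the exact ones in the text is the cost of the interrupt response and the routine's own instructions, which a careful count has to include.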