Using fixed point for drawing lines rather than Bresenham's is nothing new.
The thing is that Bresenham is resolution-independent. If the registers used have enough width to represent the coordinates of the entire display, then the line will be drawn correctly. The error term that is propagated through the loop is always precisely correct. There is no cumulative error from using approximations.
You can think of Bresenham's as being like fixed point, but with a variable denominator for the error term which depends on the specific delta. For instance if we are drawing 600 across (x) and 450 up (y), then the error term is basically an accumulator to which we add 450. Whenever it goes over 600, we subtract 600, and increment y. (Okay, so it is not like fixed point but in fact exact rational math. The error term accumulates numerators, which are compared to the denominator.)
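Spelled out as a rough C sketch (put_pixel is just a placeholder; this variant floors instead of biasing the error term by dx/2 to round to the nearest pixel, but the accumulator mechanics are the same):

    /* First-octant sketch (dx >= dy >= 0).  put_pixel() is a hypothetical
       framebuffer write.  The error term accumulates numerators (dy) and is
       compared against the denominator (dx), so everything stays exact
       integer arithmetic. */
    void put_pixel(int x, int y);

    void line_octant1(int x0, int y0, int x1, int y1)
    {
        int dx  = x1 - x0;        /* e.g. 600 */
        int dy  = y1 - y0;        /* e.g. 450 */
        int err = 0;              /* accumulated numerator */
        int y   = y0;

        for (int x = x0; x <= x1; x++) {
            put_pixel(x, y);
            err += dy;            /* add 450 each step */
            if (err >= dx) {      /* went past the denominator */
                err -= dx;        /* subtract 600 */
                y++;              /* and move up one row */
            }
        }
    }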
We could represent the same thing approximately with a fixed point number, but that pins us to one denominator. If we have, say, 8 bits of fractional precision, then we work in multiples of 1/256. Now 450/600 is exactly 3/4, so we are lucky in this case: because its denominator is a power of two, 3/4 is a multiple of 1/256, and we can step the x coordinate by 1 and the y coordinate by 3/4. The "staircase" will be the same as under Bresenham's.
Suppose the delta is <453,599>. Bresenham's still deals with this precisely: 453 is added to the error term, and 599 is subtracted whenever it overflows past 599. Yet 599 is prime, and 453/599 has no exact representation in fixed point. The best we can do is add more bits of precision to keep the cumulative error small. If the display is on the order of 1000x1000 and we go down to 1/65536 sub-pixel precision (16 fractional bits) in our fixed-point coordinates, maybe there is no noticeable artifact.
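For contrast, a fixed-point version of the same loop, with 16 fractional bits chosen arbitrarily. The slope is truncated once when the step is computed, and that single rounding is what accumulates over the length of the line:

    #include <stdint.h>

    void put_pixel(int x, int y);   /* hypothetical, as above */

    /* Fixed-point DDA sketch with 16 fractional bits (an arbitrary choice).
       The slope 453/599 is truncated once when "step" is computed, and that
       one rounding error is then multiplied by the number of steps. */
    void line_fixed_point(int x0, int y0, int x1, int y1)
    {
        int32_t dx   = x1 - x0;                  /* e.g. 599 */
        int32_t dy   = y1 - y0;                  /* e.g. 453 */
        int32_t step = (dy << 16) / dx;          /* 16.16 slope, truncated */
        int32_t y    = (int32_t)y0 << 16;

        for (int x = x0; x <= x1; x++) {
            put_pixel(x, y >> 16);
            y += step;                           /* the approximation accumulates */
        }
    }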
I spent a few days over the last Christmas holidays creating a fast line renderer for an old 8-bit home computer with a 1.75 MHz Z80 CPU and a video memory layout that groups 8 horizontal pixels into one byte. I also started with a generic 'put_pixel' subroutine, but that was way too slow no matter what the outer code looked like, because the video memory address had to be recomputed for every pixel, and a pixel could only be set through a read-modify-write (read the byte from video memory, set a bit, and write the byte back).
I ended up with an unrolled loop for 8 horizontal pixels and self-modifying code that jumped into this loop, similar to Duff's Device. At each increment in y, a pixel-mask byte was XORed into video memory, and a new jump position into the pixel loop was computed for the next 8-pixel mask.
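The self-modifying parts don't translate to C, but the core idea, collecting up to 8 horizontal pixels into one mask byte and touching video memory once per byte instead of once per pixel, sketches out roughly like this (the layout constant and names are made up for illustration, not the actual code):

    #include <stdint.h>

    #define BYTES_PER_ROW 32        /* assumed layout: 8 pixels per byte */
    extern uint8_t vram[];          /* stand-in for the machine's video memory */

    /* Shallow (x-major) line run.  Instead of a read-modify-write per pixel,
       set bits accumulate in "mask" and the whole byte is XORed into video
       memory when the line leaves that byte or steps in y.  dy_num/dx_den is
       the Bresenham numerator/denominator for the slope. */
    void line_shallow(int x0, int y, int x1, int dy_num, int dx_den)
    {
        int err = 0;
        uint8_t mask = 0;
        uint8_t *dst = &vram[y * BYTES_PER_ROW + (x0 >> 3)];

        for (int x = x0; x <= x1; x++) {
            mask |= 0x80 >> (x & 7);            /* this pixel's bit */
            err += dy_num;
            int step_y   = (err >= dx_den);
            int new_byte = ((x & 7) == 7);

            if (step_y || new_byte || x == x1) {
                *dst ^= mask;                   /* one video-memory access per byte */
                mask = 0;
            }
            if (step_y) {
                err -= dx_den;
                y++;
            }
            if ((step_y || new_byte) && x < x1) /* address of the next pixel's byte */
                dst = &vram[y * BYTES_PER_ROW + ((x + 1) >> 3)];
        }
    }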
This brings back memories - Dr. Bresenham was my graphics instructor. We drew lines & circles - lots of them. :)
But the most important thing I learned in his class was that having a well-sorted-out compiler is more important than the choice of language. Most of the class used Borland Turbo Pascal for their projects (it was up to version 5 by that point, I think). I wanted to learn C, and with his approval I used their new version 1.0 Turbo C compiler. What I found was that the Pascal code, with no heroic coding efforts, ran 3-4 times faster than the supposedly "closer to the metal" C.
I was a maverick at university; I used Bresenham's to draw hyperbolas. I had this idea that since hyperbolas are related to 1/z, maybe the approach could be used for scan-converting textures through a perspective transform, but I never developed it further.
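The gist, as a rough sketch rather than the code I actually wrote (put_pixel is a placeholder, and this plots floor(N/x); a rounded variant is a small tweak):

    /* Sketch: rasterize one branch of x*y = N (x > 0) while keeping the
       invariant N = x*y + r with 0 <= r < x, so the loop needs no division
       or floating point. */
    void put_pixel(int x, int y);

    void hyperbola_branch(int N, int x_max)
    {
        int y = N;     /* floor(N / 1) */
        int r = 0;     /* N = 1*N + 0  */

        for (int x = 1; x <= x_max; x++) {
            put_pixel(x, y);
            /* move to x+1: N = (x+1)*y + (r - y); restore 0 <= r < x+1 */
            r -= y;
            while (r < 0) {
                r += x + 1;
                y--;
            }
        }
    }

The inner while loop does a lot of work where the branch is steep, stepping through many y values per plotted x.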
We might say that Bresenham's Algorithm is not really about drawing lines; it's about doing a best-possible discrete approximation to linear interpolation, using integer arithmetic.
And since we can easily[1] change that line to any conic section we want, we should be able to use a variation on B's Alg. to do a perspective-correct interpolation, right?
Sounds good to me. However, as perhaps you were, I'm too lazy[2] right now to work out the details. :-)
--------
[1] For certain values of "easy".
[2] Or perhaps "too busy". That sounds a bit better, doesn't it?
I was pulled in so many different directions, and basically stopped hacking on computer graphics.
I guessed that the downfall of the approach would be the iterations spent on the same pixel value in a texture when, so to speak, Bresenham is "going vertical": marching through Y values while treading on the same X. If X is the texture space, that's wasteful. While rasterizing, it really behooves us to just map backwards from screen pixels, get the color value, and move on. On the other hand, the approach could do anti-aliasing: average the values being stepped over. That is usually solved with mipmapping, though: pre-filtered textures at multiple resolutions. Again: no iterative work per display pixel.
This matches results I got in the 90s on a 486DX2: fixed point math was faster than Bresenham's even when both were implemented in assembly (and both gained a significant speedup compared to the C/Pascal implementations I used as prototypes for the ASM code). You can use Bresenham's idea even for bitmap rescaling (if you understand the idea, you can probably use it in many rational-number iterative algorithms), and x86/DOS let you overwrite instructions in the code segment if you ran short on registers (hint: use MOV reg, immediate and then overwrite that immediate at its CS: offset with the value needed in one of the inner loops). Nowadays nobody would use these tricks, though, since nobody likes non-antialiased lines anymore.
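The rescaling variant might look roughly like this sketch (nearest-neighbour stretch or shrink of one row; the names and 8-bit pixel type are assumptions, not the original code):

    #include <stdint.h>

    /* Sketch of the same idea applied to rescaling: nearest-neighbour
       resampling of one row from src_w to dst_w pixels.  The accumulator
       plays the role of the line drawer's error term, so the inner loop
       has no multiply or divide. */
    void rescale_row(const uint8_t *src, int src_w, uint8_t *dst, int dst_w)
    {
        int err = 0;
        int s   = 0;                  /* current source index */

        for (int d = 0; d < dst_w; d++) {
            dst[d] = src[s];
            err += src_w;             /* add the numerator */
            while (err >= dst_w) {    /* step the source when it overflows */
                err -= dst_w;
                s++;
            }
        }
    }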