Actually that book is paywalled now. It's been something of a controversy: people contributed to the book thinking it was a community resource, but now it's unavailable.
Back in the '80s in Bulgaria, I was only able to find Bresenham's algorithm for circles, but needed to draw an ellipse (in 6502 assembly, well, machine code), and was so proud that I managed to do it.
The lessons you learn on your own stick the best, don't they?
I remember a similar "eureka" moment when I derived a line-drawing algorithm that respected "subpixel" starting and ending points. That is, don't assume a line starts/ends in the middle of a pixel (like most algos do), but rather at an arbitrary point.
I forgot why I needed this (something to do with near-axis-aligned polygon slopes not looking right at a coarse raster resolution). But I do remember the elation when I finally got it to work :-) Also early 1990s, no internet and no English, for extra fun.
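Not the parent's actual derivation, just a minimal sketch of the idea in JS: a DDA-style stepper that accepts fractional endpoints instead of assuming pixel centers. putPixel is a hypothetical plotting callback.

    // Sketch only: fractional endpoints, each sample rounded to a pixel.
    function drawLineSubpixel(x0, y0, x1, y1, putPixel) {
      const dx = x1 - x0, dy = y1 - y0;
      // number of steps based on the longer axis, at least one
      const n = Math.max(1, Math.ceil(Math.max(Math.abs(dx), Math.abs(dy))));
      for (let i = 0; i <= n; i++) {
        const t = i / n;
        putPixel(Math.round(x0 + t * dx), Math.round(y0 + t * dy));
      }
    }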
This is the era where prior art doesn't exist. Why read a book when you can blog about discovering "something new". I could probably make a killing blogging about my "discovery" of the algorithms in the book "Hacker's Delight".
The page claims, "Bresenham's circle algorithm is derived from the midpoint circle algorithm."
The author of this blog post even made it clear, at the end of their article, that... "many explanations of midpoint algorithm use the final, optimized version. But I added several unoptimized steps."
I think there's a lot of value in a blogpost that demonstrates how someone could re-derive a widely-used algorithm from scratch.
This always blows my mind - for the first time in human history we live in a world where almost all prior art is relatively easily discoverable, and people don't even bother.
I guess I shouldn't be surprised, after all, I've met a lot of people.
Indeed, but oh those pens were so expensive and woe to you if a pen ever punctured the paper. That could get messy in a hurry. And then of course the printer buffer would have another 250K worth of plotter commands and good luck getting that to stop.
With the error initialized to zero, this algorithm will always step immediately after drawing the first pixel, which will cause that pixel to "stick out".
    y += 1;          // y == 1
    err += 2*y + 1;  // err == 3
    x -= 1;          // x == radius - 1
    err -= 2*x + 1;  // err == 2 - 2*(radius - 1)
You could compare the absolute value of the new and old error, or start with err = -(radius-1) instead.
And you don't need calculus to come up with the algorithm, just simple high school algebra: (x+1)² - x² = 2x + 1.
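Putting those pieces together, here's a sketch of one common integer midpoint-circle variant (not necessarily the article's exact code), using the -(radius - 1) initialization suggested above so the first pixel doesn't stick out. putPixel is a hypothetical plotting helper.

    function drawCircle(cx, cy, radius, putPixel) {
      let x = radius;
      let y = 0;
      let err = 1 - radius;   // i.e. -(radius - 1): non-zero starting error
      while (y <= x) {
        // plot the 8 symmetric points for the current octant position
        putPixel(cx + x, cy + y); putPixel(cx - x, cy + y);
        putPixel(cx + x, cy - y); putPixel(cx - x, cy - y);
        putPixel(cx + y, cy + x); putPixel(cx - y, cy + x);
        putPixel(cx + y, cy - x); putPixel(cx - y, cy - x);
        y += 1;
        if (err < 0) {
          err += 2*y + 1;         // increments come from (n+1)^2 - n^2 = 2n + 1
        } else {
          x -= 1;                 // only step x inward once the error crosses zero
          err += 2*(y - x) + 1;
        }
      }
    }

With the error starting negative, the x -= 1 branch isn't taken on the very first iteration, which avoids the "stick out" behavior described above.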
IEEE double-precision floating-point numbers can exactly represent 53-bit integers and hence are suitable for most integer-related operations (ignoring bitwise operations).
For bitwise operations, JavaScript will first convert the number to a 32-bit two's complement signed integer.
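For concreteness, a few one-liners showing both limits (runnable in Node or a browser console):

    console.log(Number.MAX_SAFE_INTEGER);   // 9007199254740991, i.e. 2**53 - 1
    console.log(2 ** 53 + 1 === 2 ** 53);   // true: beyond 53 bits, integers lose exactness
    console.log((2 ** 32 + 5) | 0);         // 5: bitwise ops reduce the value modulo 2**32 first
    console.log((2 ** 31) | 0);             // -2147483648: the result is a signed 32-bit value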
I guess the question is really whether JS takes advantage of these facts.
Will JS, at runtime, realize that X is an int and optimize 2 * X into a bit shift operation?
Will JS recognize that Y and Z are perfectly represented integers stored in floats and so use integer instructions when adding Y + Z? Would such a thing even save time with all the casting back and forth to fp?
Oh, I thought this only came with the Emscripten-related work before WASM (I forget what the fast-optimization standard is/was called). That took some years to propagate to all browsers.
As for interpreted JS, a bitwise operator returns a 32-bit integer value, but it would still be stored as a Number (float). (Meaning: a | 0 => int, b = a | 0 => float stored in b; there is no other primitive numeric type. Instead of optimizing for speed, you would just be adding implicit type conversions.)
Ah, now I get the confusion. Yeah, most interpreters probably wouldn't have made the in-memory distinction between small ints and doubles. So the conversion would have to happen back and forth every time a bitwise operator was applied.
Nowadays the semantics of what happens are still the same, but things are a bit more optimized, usually avoiding the round-trip to double when it's not necessary.
Isn't there even a formal specification that hot code maintains values or-ed with zero internally as integers?
(I think this was originally introduced by Mozilla and propagated to other browsers to varying extents. This optimization allowed a significant speed-up for things like Emscripten before WASM.)
Yes, asm.js, which by now has been abandoned and superseded by WebAssembly. And while Chrome never really supported asm.js, they added some optimizations that benefitted code written in that style.
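From memory, code "written in that style" looked roughly like this: a "use asm" prologue plus |0 coercions serving as integer type annotations the engine could compile ahead of time (a sketch, not a complete asm.js module):

    function FastMath(stdlib, foreign, heap) {
      "use asm";
      function add(a, b) {
        a = a | 0;            // parameter annotation: a is an int
        b = b | 0;            // parameter annotation: b is an int
        return (a + b) | 0;   // return-type annotation: int result
      }
      return { add: add };
    }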
I think that may also have been a problem with asm.js. While the code would work as is because it's just JS, you won't get any performance guarantees because the runtime could just do its own thing instead of doing ahead-of-time compilation to optimized code.
JS VMs have real ints (and you can usually summon smis reliably with |0), but you're much better off calling into some browser capability that will leverage the GPU most of the time.