It is funny that the author says to recycle the 1st edition and never show it to anyone. I actually failed to optimally solve an interview question because a topic covered in the 1st edition was removed in the 2nd, and other graphics books only give the O(N log N) solution instead of the optimal O(N) solution.
The problem was about vertex welding. Some books, like Real-Time Rendering, only mention the O(N log N) solution.
I was annoyed after the interview and thought, which book would I have had to read to get this question right? The only 2 books I could find that had the optimal answer were the 1st edition of this math book and Real-Time Collision Detection.
I suppose if I was smarter I could have figured out the "bucket" trick. I find interviews too stressful to think clearly.
What a petty interview :( Does the candidate have the desire and skill to find the correct answer given access to the right resources? That's what they need to ask. Not what is in the candidate's brain on the day of the interview.
I've forgotten more than I know, but I know where to find it.
Depending on the job, this might be (somewhat) reasonable. I'd assume that one could still get hired even if one failed that particular question, but showed some understanding of the problem space.
For many problems, someone tackling them might not even realize that there is a problem, and might have no clue where to start searching for answers.
I've seen many developers burn a crazy number of hours by going down the wrong path and then trying to StackOverflow or Google their way out of the mess they initiated. It takes experience (gained firsthand or learned from others) to avoid such traps.
Then again, it really depends on the job. The interview question suggests this was not for a CRUD job, though :)
Another question was about a low-level assembly performance trick. Afterwards I did the same thing and tried to figure out which textbook I would have had to read to know the answer. I only found one computer architecture book that mentioned the instruction, and that wasn't the book I read in university.
I've interviewed for 3D graphics jobs and interviewed others for them. I was definitely only interested in whether the person had the skill to find a solution.
For my first 3D game dev job I just turned up with a floppy that had my 3D engine on it and got them to stick it in a PC. That was all it took to get hired. That, and being incredibly naive about salary negotiations :D
I also turned down a possible position at Argonaut around the time of Star Fox. I got that one by arguing repeatedly with Jez San on Usenet lol
I do not envy anyone facing a graphics programmer interview. There are at least 3 types of graphics programmers these days, and the questions for each are night and day. Imagine studying these specific mathematical approaches, but instead you're thrown shader trivia. Or super-specific DirectX API calls.
> Not what is in the candidate's brain on the day of the interview.
A close sibling of this mentality lives in academia as well, manifested as the professor who bases 50%+ of one's grade on the final. The same goes for pass/fail filtering exams generally (and the cultures that endorse them), such as the MCAT.
Companies don't tell interviewees why they didn't hire them, so are you sure it was these questions that led to them passing on you? Often you are one among many interviewing at the same time, and you may have been given a thumbs up, yet someone with more experience was hired.
I agree that it isn't the sole reason why I didn't get the job. If I had done great on the other 5 interviews, then my O(N log N) solution probably would have been fine. However, not getting the optimal solution didn't help.
I also struggled to find the answer online. I found it in only 2 books.
Chapter 12 of Real-Time Collision Detection has a section on vertex welding.
The 1st edition of 3D Math Primer has it in Chapter 14.4.2: Vertex Welding.
If you want to merge vertices that are within WELD_EPSILON of each other, you can create a 3D grid of cells with a cell size of at least 2*WELD_EPSILON and put each vertex in its cell. For each vertex, you then only need to check a small, fixed set of cells (at most 8, including the vertex's own, since its epsilon-neighborhood can overlap at most 2 cells per axis) to see whether it should be merged with a nearby vertex.
The worst case can still be worse than O(N) if most of the vertices end up in the same grid cell but don't actually require merging.
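To make the bucket trick concrete, here's a minimal C++ sketch under assumed names (Vec3, WELD_EPSILON, and weldVertices are all made up here, not taken from either book). For simplicity it scans the full 3x3x3 block of cells around a vertex rather than the tighter 8-cell set:

    #include <cmath>
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct Vec3 { float x, y, z; };

    constexpr float WELD_EPSILON = 1e-4f;
    constexpr float CELL_SIZE    = 2.0f * WELD_EPSILON; // must be >= 2 * epsilon

    // Hash a 3D integer cell coordinate into a single map key.
    static uint64_t cellKey(int x, int y, int z) {
        return (uint64_t)(uint32_t)x * 73856093u
             ^ (uint64_t)(uint32_t)y * 19349663u
             ^ (uint64_t)(uint32_t)z * 83492791u;
    }

    // Returns, for each input vertex, the index of its welded representative.
    std::vector<int> weldVertices(const std::vector<Vec3>& verts) {
        std::unordered_map<uint64_t, std::vector<int>> grid; // cell -> representatives
        std::vector<int> remap(verts.size());

        for (int i = 0; i < (int)verts.size(); ++i) {
            const Vec3& v = verts[i];
            int cx = (int)std::floor(v.x / CELL_SIZE);
            int cy = (int)std::floor(v.y / CELL_SIZE);
            int cz = (int)std::floor(v.z / CELL_SIZE);

            int match = -1;
            // Any vertex within WELD_EPSILON of v lies in v's cell or an
            // adjacent one, because CELL_SIZE >= 2 * WELD_EPSILON.
            for (int dz = -1; dz <= 1 && match < 0; ++dz)
            for (int dy = -1; dy <= 1 && match < 0; ++dy)
            for (int dx = -1; dx <= 1 && match < 0; ++dx) {
                auto it = grid.find(cellKey(cx + dx, cy + dy, cz + dz));
                if (it == grid.end()) continue;
                for (int j : it->second) {
                    float ex = v.x - verts[j].x;
                    float ey = v.y - verts[j].y;
                    float ez = v.z - verts[j].z;
                    if (ex*ex + ey*ey + ez*ez <= WELD_EPSILON * WELD_EPSILON) {
                        match = j;
                        break;
                    }
                }
            }

            remap[i] = (match >= 0) ? match : i;
            if (match < 0) grid[cellKey(cx, cy, cz)].push_back(i);
        }
        return remap;
    }

With a reasonably uniform vertex distribution each cell holds O(1) representatives, so the whole pass is expected O(N). Hash collisions between distinct cells are harmless because the distance test still filters out false matches.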
I had that book! I was like 16. I was so blown away by the blend between math, programming, and graphics. I didn't write a game until a decade later though.
If you work on something small and simple on a constrained timescale with a set deadline first—like a Pac-Man clone, or an Arkanoid clone, or a Vampire Survivors clone—it will take you longer than you would've expected, but it still won't take too long, and you'll come away from the experience with some code you can reuse, some ideas of how to do things better next time, and a ton of confidence due to having explored the problem domain somewhat.
"Punching above your weight" and taking on an experimental or otherwise complex game design for a game development project is a good way to learn, but it can be frustrating and you can get burned out when you run into unforeseen design and implementation issues. I think this is the way most people go about independent game development, and it's how I started out when I was much younger, too. "Punching below your weight" and choosing a smaller-scoped project—ideally one with a predefined and tested design—is extremely underrated. Unless you've done something like this many times before, it might seem "beneath you" (especially if you have some ideas of the ideal long-term game project you want to work on that you're excited about), but it's actually a great way to solidify your skills (especially your design skills, as you play around with modifying an existing game design once you've implemented the basics of it), and prove to yourself that you can finish something you set out to do. Plus, in the process, you might find something unexpected in terms of either tech or design that you can take with you to your next "real" project.
This is still my go-to math book for game dev. Excellent writing style.
I'd also recommend it for anyone getting into ML or LLMs; the intuition it helps you build around linear algebra concepts like vectors and matrices is very helpful for understanding what's going on in ML.
The cool thing about graphics programming is that oftentimes you aren't even doing strictly "linear algebra".
A lot of the math for 3D graphics programming uses the concepts of affine spaces/transformations, since standard 3x3 matrices don't have enough information to support translation/projection. I had no clue about this branch of math at all until I started learning graphics programming. In fact, I think graphics programming requires you to learn the most math of just about any discipline of computer science outside theoretical computer science; the amount of math you need to truly "understand" path tracing is immense.
Homogeneous coordinates are still linear algebra, just in a 4D space instead of the obvious 3D one you would naively pick. It's a bit like quaternions, where adding an extra degree of freedom works around an expressivity problem you didn't really know you had until it was clearly solved (e.g. gimbal lock with Euler angles). And the term linear algebra covers high-dimensional uses like training neural networks; it's the thing that gives us the matrix formulation of all this stuff.
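To make that concrete: a 3D point p gets lifted to homogeneous coordinates as (x, y, z, 1), and a single 4x4 matrix can then pack a 3x3 linear part R (rotation/scale) together with a translation t, which no 3x3 matrix acting on (x, y, z) alone can represent:

    \begin{pmatrix} R & \mathbf{t} \\ \mathbf{0}^{\top} & 1 \end{pmatrix}
    \begin{pmatrix} \mathbf{p} \\ 1 \end{pmatrix}
    =
    \begin{pmatrix} R\mathbf{p} + \mathbf{t} \\ 1 \end{pmatrix}

The translation lands in the fourth column, and the bottom row is what perspective projection gets to play with, which is why graphics APIs standardize on 4x4 matrices.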
Quaternions are an extension of complex numbers and are used ubiquitously in graphics programming for both 2D and 3D applications. Most math libraries support them.
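For anyone who hasn't seen the mechanics, here's a minimal self-contained C++ sketch (with hypothetical names like Quat and fromAxisAngle; a real engine would use its math library's quaternion type). Rotating a vector v by a unit quaternion q is the sandwich product q * (0, v) * conj(q):

    #include <cmath>
    #include <cstdio>

    // Hypothetical minimal quaternion type, for illustration only.
    struct Quat { float w, x, y, z; };

    // Hamilton product of two quaternions.
    Quat mul(const Quat& a, const Quat& b) {
        return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
                 a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
                 a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
                 a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
    }

    // Conjugate; for unit quaternions this is also the inverse.
    Quat conj(const Quat& q) { return { q.w, -q.x, -q.y, -q.z }; }

    // Unit quaternion for a rotation of 'angle' radians about a unit axis.
    Quat fromAxisAngle(float ax, float ay, float az, float angle) {
        float s = std::sin(angle * 0.5f);
        return { std::cos(angle * 0.5f), ax * s, ay * s, az * s };
    }

    int main() {
        // Rotate (1, 0, 0) by 90 degrees about z; expect roughly (0, 1, 0).
        Quat q = fromAxisAngle(0.0f, 0.0f, 1.0f, 3.14159265f * 0.5f);
        Quat v = { 0.0f, 1.0f, 0.0f, 0.0f };   // pure quaternion (0, v)
        Quat r = mul(mul(q, v), conj(q));
        std::printf("(%g, %g, %g)\n", r.x, r.y, r.z);
    }

In practice libraries expand the sandwich product into a cheaper closed form, but the two Hamilton products above are the definition.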
If your gripe is that programming languages don't have built-in support for 3D math, then I think that should be expected. Most languages try to keep their standard libraries small, and their built-in types smaller. A language with built-in support for such things would be incredibly niche. Maybe JAI is what you're looking for? I'm not sure if even that has built-in support, however.
It's certainly possible to interpret numbers as types, but it's not particularly common AFAIK, since there are generally more convenient interpretations of arithmetic for a given problem domain (von Neumann sets, Dedekind cuts, IEEE 754 floating point values, &c &c).
Personally, I've always found the non-rigidity of the complex numbers (i.e. that, when considered as a field, they admit a non-trivial automorphism), which requires the introduction of the Re and Im operators (i.e. projection back to the reals) to banish, to be strongly suggestive that the reals have a somehow more "real" ontological status.
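For concreteness: that non-trivial automorphism is complex conjugation. Any field automorphism of ℂ fixing ℝ pointwise must send i to a square root of -1, so the identity and conjugation are the only two, and the Re/Im projections are built directly from the latter:

    \sigma(z) = \bar{z}, \qquad
    \operatorname{Re}(z) = \frac{z + \bar{z}}{2}, \qquad
    \operatorname{Im}(z) = \frac{z - \bar{z}}{2i}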