Better Geometry Through Graph Theory (2018) (ideolalia.com)
161 points by prospero on June 29, 2020 | 33 comments



One geometric model that actually handles numerical approximations is called exact geometric computation. It's pretty interesting and starts by explicitly tracking the arithmetic errors in the operations you do. At its core it's a mix of interval arithmetic and multi-precision floating point. This is something you can do since the floating point implementations on most (all?) CPUs give you strong guarantees on the error of arithmetic operations: IEEE 754 requires the basic operations to be correctly rounded, i.e. within 0.5 ulp (unit in the last place) of the exact result. Now you have an interval for each result and can build your geometric constructs on interval arithmetic rather than classic, single-point scalar arithmetic.
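
For illustration, here's a minimal sketch of the interval idea in Python 3.9+ (the Interval class and the use of math.nextafter as a crude stand-in for directed rounding are my own simplifications, not any real library's API):

  # Widen each bound by one ulp so the interval is guaranteed to
  # enclose the exact result; real implementations switch the FPU
  # rounding mode instead of calling nextafter.
  import math

  class Interval:
      def __init__(self, lo, hi):
          self.lo, self.hi = lo, hi

      def __add__(self, other):
          return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                          math.nextafter(self.hi + other.hi, math.inf))

      def __mul__(self, other):
          p = [self.lo * other.lo, self.lo * other.hi,
               self.hi * other.lo, self.hi * other.hi]
          return Interval(math.nextafter(min(p), -math.inf),
                          math.nextafter(max(p), math.inf))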

Geometry works with constructs, not numbers. An intersection is a construct, collinearity is a construct, orientation / handedness is a construct, etc. If you define your fundamental constructs to be rigorous then the algorithms you build on top will also be rigorous. If we consider the article's inside-outside test example, the result would be computed based on the intersections (intervals) we've computed. Since we know for sure the correct result lies within our error interval, inside-testing boils down to comparing intervals, and there are three possible situations: 1. [a, b] < [c, d]; 2. [a, b] > [c, d]; 3. [a, b] overlaps [c, d]. In cases 1 and 2 we have a robust conclusion; in case 3 we recompute the whole predicate with higher precision until it becomes decidable.
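
Here's a sketch of that filter-then-escalate pattern (interval_compare and the recompute_exact callback are hypothetical names for re-evaluating the predicate exactly, not CGAL's actual API):

  def interval_compare(a, b, c, d, recompute_exact):
      """Compare [a, b] against [c, d]; escalate only on overlap."""
      if b < c:
          return -1             # case 1: [a, b] < [c, d], robust
      if d < a:
          return 1              # case 2: [a, b] > [c, d], robust
      # case 3: overlap, so the floating point filter is inconclusive;
      # fall back to an exact recomputation of the predicate
      x, y = recompute_exact()  # returns two exact values, e.g. rationals
      return (x > y) - (x < y)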

The gold standard for exact geometric computation is CGAL [1]. All the research around it is very interesting and refreshing to read.

[1] cgal.org


Hi, author here. I looked into that side of things, but CGAL only offers exact precision implementations for lines and circular arcs. The fact that Bézier curves are left as an exercise for the reader is further proof of the disconnect between computational geometry and modern computer graphics.


Fair point. At the same time the exact computation paradigm is proof that problems like these can be solved in a rigorous way if you start with the right foundation.

By the way, thanks for writing the article; it was a refreshing read.


I'm curious; I've played with OpenSCAD a little, which uses CGAL, and found it to be painfully slow (with no particular point of comparison). Do you think you have a way to do similar CSG calculations faster, or at least trade speed for accuracy or something?


It's very plausible that these techniques could be used to fix the output of a CSG library that uses floating point math, but I'm not sure what the specifics would look like. If anyone has ideas in that vein, I'd be very interested to hear them.


I was thinking (based on skimming the documentation) that CGAL uses arbitrary-precision integer-based rationals, which are slow, and that floating point with the error tracking described above might potentially be faster.

Unfortunately, it's probably way beyond my ability to delve into it.


'Exact real arithmetic' is probably the best starting point for what you're describing here. It does have plenty of application areas besides computational geometry. The issue with simple interval arithmetic is that intervals will widen when chaining multiple computations. To cope with this and obtain rigorous answers, one needs to set a required bound on the final result and do arbitrary-precision computations as entailed by that final bound.
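
As a concrete sketch of that refine-until-tight loop (using mpmath's interval context as a stand-in for a real exact-real-arithmetic engine; the expression and target width are arbitrary examples):

  from mpmath import iv

  def evaluate_to_width(f, target_width, start_prec=53):
      prec = start_prec
      while True:
          iv.prec = prec                   # working precision in bits
          result = f()
          if result.delta < target_width:  # delta = width of the enclosure
              return result
          prec *= 2  # intervals widened too much: double precision, retry

  value = evaluate_to_width(lambda: iv.exp(iv.mpf(1)) - iv.mpf(1), 1e-30)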


I believe Android's built-in calculator app takes this approach: https://dl.acm.org/doi/pdf/10.1145/2911981

It also uses rational numbers and arbitrarily large integer arithmetic.
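
The rational half is the easy part: the field operations stay exact over arbitrarily large integers, so no rounding drift can accumulate. Python's Fraction shows the idea (my example, not the app's code):

  from fractions import Fraction

  x = Fraction(1, 3)
  print(x + x + x == 1)    # True: exact rational arithmetic
  print(0.1 + 0.2 == 0.3)  # False: binary doubles drift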


I wonder if anyone has thoughts on the approach taken here ("Sound and robust solid modeling via exact real arithmetic and continuity"): https://icfp19.sigplan.org/details/icfp-2019-papers/15/Sound...


I work on developing better tools for solving computational geometry problems robustly and efficiently, so I got quite excited when this paper first appeared.

However, while the type-theoretic developments based on Abstract Stone Duality are interesting, they mostly ignore the problem of efficiency by simply representing every real number as a Dedekind cut. Thus, it doesn't scale without significant advances in compiling real arithmetic, a problem I'm working on presently, though it might take a few years...


Sounds like a reinvention of coedges to me; boundary representations track topology and geometry in separate structures. Approximation errors are always a nasty issue, and that is the non-trivial work in a B-rep modeling library: for different edge cases you need different algorithms, and maybe even to transform the geometry into a different parameter space to minimize the approximation errors when computing an intersection between geometries.


I'm not familiar with coedges --- they sound like the darts in the dual map of a planar combinatorial map[1] to me. Do you know if they correspond like this?

In short, a combinatorial map is a graph along with a counterclockwise order of incident half-edges ("darts") at each vertex. This is exactly enough data to record the topological information about how a given connected planar graph is embedded in the plane. They also work for graphs in general oriented surfaces, with the proviso that the complement of the graph consists of a bunch of faces homeomorphic to disks. For example, no annulus faces.
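
A toy encoding of that structure (my own sketch, not the article's representation): each dart stores its twin and the next dart counterclockwise around its origin vertex, and tracing a face is just alternating the two.

  class Dart:
      def __init__(self, origin):
          self.origin = origin   # vertex this dart points away from
          self.twin = None       # other half of the same edge
          self.next_ccw = None   # next dart counterclockwise at origin

  def face_cycle(start):
      """Trace the face cycle through `start`: cross the edge (twin),
      then turn to the next dart at the new vertex."""
      cycle, d = [], start
      while True:
          cycle.append(d)
          d = d.twin.next_ccw
          if d is start:
              return cycle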

When reading the article, it seemed like what the author was doing was to construct something like a combinatorial map from the purported intersections then use that to answer inside/outside questions. (A graph by itself is unable to answer these questions since it's merely abstract vertices and edges. While the code they use shows the use of graphs[2], the graphs contain the geometric information of the vertex locations, which sort of lets you work with it as if it were a combinatorial map.)

[1] https://en.wikipedia.org/wiki/Combinatorial_map

[2] https://github.com/lacuna/artifex/blob/master/src/io/lacuna/...


Bivector.net

Implementing computer graphics should be done with geometric algebra.

I don't know anything about this field but I know the difference between a clean, consistent API and a random hodgepodge of hacks.

Since I don't know anything, I'm not sure if GA allows you to avoid errors such as the one in the article, but in typical HN fashion I'll derail with some left-field claim and a link to some other project.


As nice as GA is, I can’t see how it would help with the problem mentioned in the article, namely dealing with floating point approximation.


It depends on the specifics of the approach. Some variants of Geometric Algebra like Conformal GA are particularly well suited to boolean operations and for finding intersections. Also, GA can help reduce numerical error in many circumstances where vector algebra has issues, because it is more numerically stable in general.

However, some of the issues raised in that article will still be a problem, and the solutions in either vector algebra or geometric algebra would similarly benefit from this approach.

One related trick I've seen recently is to purposefully degrade floating point performance (e.g.: using fp16 or just zooming in a lot) so that rounding errors and numerical instability can be "visually inspected". This is an under-utilised method that reveals that many common graphics algorithms are designed for infinite precision reals and aren't optimal for IEEE 754 reals.
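
A toy version of that inspection trick, with numpy's float16 as the degraded type (the quadratic Bézier here is just an arbitrary example of a graphics computation):

  import numpy as np

  def bezier2(p0, p1, p2, t):
      # direct polynomial evaluation of a quadratic Bezier curve
      return (1 - t)**2 * p0 + 2 * (1 - t) * t * p1 + t**2 * p2

  t64 = np.linspace(0, 1, 1000, dtype=np.float64)
  exact = bezier2(0.1, 0.9, 0.3, t64)
  coarse = bezier2(np.float16(0.1), np.float16(0.9), np.float16(0.3),
                   t64.astype(np.float16))
  print(np.abs(exact - coarse).max())  # error is now big enough to plot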


It feels like lisp - neat, maybe has pedagogical value, does not really solve any hard problems and usually not worth the overhead. Yet I still want to like it.


I'm wondering where you got the notion that Lisp doesn't solve any hard problems? History proves otherwise over and over.


Not knowing what you consider a hard problem, I think most of them are not of the kind that a language can do much about. It can easily be a drag though, for example by being too high-level and taking control away from the programmer when it's needed. I'm not saying it's necessarily a bad tool (lisp or GA). They are just tools, not magic, and tools don't solve hard problems.


Oh, tools definitely solve hard problems. It's just that once you've got the right tool for a hard problem, the hard problem ceases to be a hard problem.


Or perhaps, choosing the right tool can reveal that certain, select problems are not actually that hard.

And perhaps, certain other problems, like computational geometry, are in fact just really fucking hard, no matter how you express them. Having some familiarity with the space, where the only really high-quality implementations are a massive GPL research codebase in C++ (CGAL) and commercial C libraries (Parasolid, SMLib), I lean towards this view.


I have some familiarity with computational geometry myself. I've built myself the tools I need for it in pure Apple Metal, and it beats anything you can buy or get for free (for my particular needs).


Do you mean computational geometry for graphics applications (your mention of Metal suggests this)?

I'm referring to computational geometry for manufacturing and engineering simulation applications, which is an entirely different beast (in particular, accurately tracking topology is much more important, and generally requires arbitrary-precision floats for degenerate cases).


No, manufacturing and engineering, for example computing the offset of a 3D body represented by a surface. This benefited heavily from massively parallel computation via Metal.

I also implemented other algorithms from scratch for solid body operations, and here indeed arbitrary-precision rationals were needed first, but then I could get it working with normal double arithmetic in a lot of cases later on; I didn't use Metal here though.

I find that libraries like CGAL etc. are just either too slow or not general enough for my purposes. The whole sector seems ripe for disruption via proper algorithms implemented on graphics cards.


If I were you I would try to look up who uses Parasolid then, because they charge a fuckton of money.


Thx!


Companies have been using Lisp to accelerate game development with a scripting language in AAA games.

Making a game is a hard problem.

See Lisp on Playstation 2: https://en.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp


But nobody big enough uses Lisp to script games anymore.


Lisp didn't fall out of practical use because it's bad; it fell out of practical use because its successors were better.


Don't Naughty Dog still use some type of Lisp for their scripting?


Thanks for mentioning bivector.

Check out the demo https://observablehq.com/@enkimute/animated-orbits

Watch the SIGGRAPH talk https://youtu.be/tX4H_ctggYo

Join the discord https://discord.gg/vGY6pPk


Geometric algebra solves a different problem. This is about correctness (aka robustness), which is probably more important.


Also check out Grassmann.jl at https://github.com/chakravala/Grassmann.jl


This is the whole point of affine arithmetic, no?



