Hacker News

This approach is strongly reminiscent of abstract algebra; for example, the Map-Reduce Transformation example seems to be(?) justified exactly when f is an isomorphism over r.

I've been thinking along somewhat similar lines, but I've been looking at it from the perspective of introduction and elimination rules, as found in type theory. If your compiler can find a transformation that brings an intro form and an elim form together, then you can apply the computation rule to remove both from the program. My motivation is to remove allocations that aren't really essential. (I do have a functional language compiler that I work on, but it doesn't implement any optimizations yet.)
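For what it's worth, here's a minimal sketch of the kind of fusion I mean, using a toy AST of my own invention (all names hypothetical): when a pair introduction meets a first-projection elimination, the computation rule fst(pair(a, b)) -> a removes both forms, and the pair allocation goes with them.

```python
from dataclasses import dataclass

@dataclass
class Pair:   # intro form for products
    left: object
    right: object

@dataclass
class Fst:    # elim form: first projection
    arg: object

def simplify(e):
    """Bottom-up rewrite: cancel an elim form applied to its intro form."""
    if isinstance(e, Fst):
        inner = simplify(e.arg)
        if isinstance(inner, Pair):
            # intro meets elim: apply the computation rule, dropping both nodes
            return inner.left
        return Fst(inner)
    if isinstance(e, Pair):
        return Pair(simplify(e.left), simplify(e.right))
    return e

assert simplify(Fst(Pair("x", "y"))) == "x"
```

In a real compiler the interesting part is the "find a transformation that brings them together" step; this sketch only handles the case where they're already adjacent.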

Another related idea comes from a Philip Wadler talk[1] I saw, which discusses how normalization can be used to achieve something like your "hypothetical inverses", which should always be eliminated from the final result.

> These transformations might only be useful in obvious optimization scenarios (cases that no programmer would intentionally create).

I would be surprised if that turned out to be the case.

[1] "Everything Old is New Again: Quoted Domain Specific Languages" https://www.youtube.com/watch?v=DlBwJ4rvz5c




Yes! I think perhaps that f needs only to be a homomorphism instead of an isomorphism, but I don't trust myself enough to make such a claim definitively :)

Always curious to follow people doing compiler work; loop me in, yeah?

And, just watched that Wadler talk; it's good stuff. Of course, it sounds like his scenario is: "there is a way to do the transformation, you just have to find it," whereas in my case it's more like: "there might or might not be an algebra in which f acts as a homomorphism, and, even if there is, you may not have the transformations you need to determine it!"


That's funny. I was thinking isomorphism because you postulate an inverse, which homomorphisms don't always have. But the identity doesn't depend on f being invertible. You could just have a rule:

If f is a homomorphism with respect to r, then reduce(r, map(f, l)) = f(reduce(r, l)).

The rule can be proven without reference to an inverse for f, so isn't it enough to just add such a rewrite rule to your optimizer? (Though now I'm not sure what the inversion trick accomplishes.)
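Concretely, here's the rule checked on a small example (my own choice of f and r): incr is a homomorphism with respect to max, since max(x + 1, y + 1) == max(x, y) + 1.

```python
from functools import reduce

def incr(x):
    return x + 1

l = [3, 1, 4, 1, 5]

# Original form: map first, then reduce -- builds an intermediate sequence.
lhs = reduce(max, map(incr, l))

# Rewritten form: reduce first, then apply f once at the end.
rhs = incr(reduce(max, l))

assert lhs == rhs  # both are 6
```

The rewritten form applies f once instead of once per element, and the intermediate mapped sequence never needs to exist.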

> Always curious to follow people doing compiler work; loop me in, yeah?

norstrulde.org and twitter.com/ericbb

> "there might or might not be an algebra in which f acts as a homomorphism, and, even if there is, you may not have the transformations you need to determine it!"

To my way of interpreting it, the algebra is determined by r, which is given. You might not know whether f is a homomorphism with respect to r, and it'd be tricky to figure that out by inspecting the definitions of f and r. But if you have a table built in advance that characterizes functions in relation to each other (incr is a homomorphism with respect to max, etc.), then you can drive your rewrite engine using that table... So I wouldn't say that you want to use transformations to determine whether f is a homomorphism, but rather that you want to use the fact that f is a homomorphism to decide which transformations are applicable. Does that make sense?

Edit: I guess, more generally, you could have a relation with the form homomorphism(f:a->b, q:a->a->a, r:b->b->b) which justifies the transformation reduce(r, map(f, l)) -> f(reduce(q, l)).
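To illustrate that more general form with an example of my own choosing: len is a homomorphism from lists under concatenation (q) to integers under addition (r), since len(xs ++ ys) == len(xs) + len(ys), so the table entry homomorphism(len, ++, +) justifies the rewrite.

```python
from functools import reduce
import operator

l = [[1, 2], [3], [4, 5, 6]]

# reduce(r, map(f, l)): sum the lengths.
lhs = reduce(operator.add, map(len, l))

# f(reduce(q, l)): concatenate, then take one length.
# (operator.add happens to express both q, list concatenation,
# and r, integer addition, in Python.)
rhs = len(reduce(operator.add, l))

assert lhs == rhs  # both are 6
```

Note that here q and r are different operations on different types, which the simpler rule (where f maps a into itself) can't express.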




