
> Our approach achieves a 3.0% improvement in reducing instruction counts over the compiler, outperforming two state-of-the-art baselines that require thousands of compilations.

If that's their target, then what's the point? LLVM's optimizations are not done to minimize instructions but to maximize performance. On modern processors, these can be very different things.



You're right that a decrease in code size doesn't necessarily mean a performance increase (and the two can even be inversely correlated, as with inlining).

But LLVM targets both, depending on which optimization pipeline you select: -Oz/-Os target minimum code size, while -O1/-O2/-O3 focus on runtime performance.

Code size reduction is critical in some use cases, such as embedded environments and mobile apps, and it is a significant area of research.
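
A quick way to see the trade-off for yourself (just a sketch; foo.c is a placeholder for whatever source file you have on hand):

    # build the same file with the runtime-focused and the size-focused pipeline
    clang -O3 -c foo.c -o foo_o3.o
    clang -Oz -c foo.c -o foo_oz.o

    # compare the .text (code) sizes of the two objects
    size foo_o3.o foo_oz.o

On most non-trivial inputs the -Oz object will have a noticeably smaller text section, because -O3 is happy to spend bytes on inlining, unrolling, and similar transforms.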


Hey, we’re targeting code size in this work, not runtime performance. You would use an option like -O3 to optimize for runtime and -Oz to optimize for code size. The pass pipelines are different for both.
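
If you want to see concretely how the two pipelines differ, here is a sketch using opt (assumes a reasonably recent LLVM; foo.c and foo.ll are placeholders):

    # emit LLVM IR for some source file
    clang -S -emit-llvm foo.c -o foo.ll

    # print the pass pipeline that each default level expands to
    opt -passes='default<O3>' -print-pipeline-passes foo.ll -o /dev/null
    opt -passes='default<Oz>' -print-pipeline-passes foo.ll -o /dev/null

Roughly speaking, the Oz pipeline uses much more conservative inlining thresholds and skips most size-expanding transforms such as loop unrolling.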



