
The recent CPPCast episode on LFortran might shed some light on that?

https://cppcast.com/lfortran/

In it, the guest talks about how he couldn't rely on LLVM for things like optimizing array operations; LLVM apparently does a poor job of that, so he had to implement his own optimizations. Given that one of the key selling points of Intel's compiler is that it does a better job with SIMD optimizations, it may be exactly the same story here.




Without listening to that podcast, I'm gonna go out on a limb and say this is a somewhat Fortran-specific problem (or at least, not a problem encountered in C or C++), namely how to optimize Fortran array expressions.

The LLVM IR doesn't understand array operations, so the Fortran frontend must 'scalarize' the array expressions, that is, turn them into the equivalent loops. Only after that can the LLVM middle and back-ends try to vectorize that scalar IR for execution on SIMD (or vector, or GPU) HW. The problem is that this scalarization loses some information, and thus for good performance on array code the Fortran frontend must implement some optimizations on array operations before scalarizing.
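
To make that concrete, here's a minimal sketch (the program and variable names are mine, not from the thread) of a whole-array expression and the kind of scalar loop the frontend has to lower it to before LLVM ever sees it:

    program scalarize_demo
      implicit none
      real :: a(4) = [1.0, 2.0, 3.0, 4.0]
      real :: b(4) = [4.0, 3.0, 2.0, 1.0]
      real :: c(4), d(4)
      integer :: i

      ! What the programmer writes: a single whole-array expression.
      c = a + b

      ! Roughly what the frontend lowers it to before emitting LLVM IR:
      ! an explicit scalar loop. The guarantee that the iterations are
      ! independent is now implicit, and the vectorizer has to
      ! rediscover it from the loop structure.
      do i = 1, 4
         d(i) = a(i) + b(i)
      end do

      print *, c
      print *, d
    end program scalarize_demo

In the array form, the language semantics guarantee that the element operations are independent and that the right-hand side is evaluated before the assignment; once it's been turned into a loop, the optimizer has to prove those properties all over again.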

There is an LLVM subproject called MLIR (https://mlir.llvm.org/) that aims to build a higher-level IR that understands arrays. It's useful for things like optimizing deep learning graphs, but Fortran frontends could also make use of it. AFAIK the flang Fortran project aims to use it, but I haven't followed development that closely.



