
I work in bioinformatics, where the underlying technology changes often enough that you don't have to think too much about portability. If a fundamentally different new system emerges, my old code will probably be obsolete before I have to think about supporting it. I was around for the 32-bit/64-bit transition (which was painful), and I've been porting x86-targeted code to ARM (which usually is not). Here are some thoughts I've had about portability and system features:

Your code includes your build system and dependency management. If they are not portable, your code is not portable.

I'm a language descriptivist. A standard is an imperfect attempt at describing a language. When the compiler and the standard disagree, the compiler is right, because it decides what the code will actually do.

OpenMP is an essential part of C/C++. Compilers that don't support it can't compile code that claims to be C/C++.
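To illustrate the level of OpenMP usage I mean, here is a minimal sketch (not from any real project). With GCC or Clang it builds with -fopenmp; without that flag the pragma is simply ignored and the loop runs serially:

  #include <cstddef>
  #include <vector>

  // Typical bioinformatics-style use: parallelize one hot loop.
  void scale(std::vector<double>& v, double factor) {
      #pragma omp parallel for
      for (std::ptrdiff_t i = 0; i < (std::ptrdiff_t)v.size(); ++i)
          v[i] *= factor;
  }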

There is no point in pretending that you support systems you don't regularly use. In my case, portability means supporting Linux and macOS, x86 and ARM, GCC and Clang, but not in all combinations.

Portability, simplicity, and performance form a triangle. You can choose two and lose the third, or you can make compromises between them.

Pointers are 64-bit. If they are not, you need a lot of workarounds for handling real data, because size_t is too small.
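If you want to make that assumption explicit rather than implicit, a compile-time check is cheap (a sketch; the wording is mine):

  #include <cstddef>

  // Fail the build early on platforms where size_t cannot index
  // large files, instead of overflowing quietly at run time.
  static_assert(sizeof(void*) == 8 && sizeof(std::size_t) == 8,
                "this code assumes 64-bit pointers and size_t");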

Computers are little-endian, because that enables all kinds of convenience features such as using data from memory-mapped files directly.
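Concretely, the convenience looks like this (a sketch, assuming the on-disk format stores integers in little-endian order, as formats like BAM do):

  #include <cstdint>
  #include <cstring>

  // Read a 32-bit field straight out of a memory-mapped buffer.
  // On a little-endian host this is just a copy; a big-endian
  // host would need a byte swap here.
  std::uint32_t read_u32(const unsigned char* mapped) {
      std::uint32_t value;
      std::memcpy(&value, mapped, sizeof value);
      return value;
  }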

Compilers should warn you when you use integer types of platform-dependent width, unless the width is required to match the pointer width, or the variable is argc or something derived from it.
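No mainstream compiler has exactly this warning as far as I know, but the distinction I want it to draw is easy to show (a sketch; the names are mine):

  #include <cstdint>
  #include <cstddef>

  long hits;             // 32-bit on LLP64 Windows, 64-bit on LP64
                         // Linux/macOS: this is what I want flagged
  std::int64_t offset;   // fixed width everywhere: fine
  std::size_t n_items;   // pointer-width on the platforms I support: fine

  int main(int argc, char** argv) {   // argc: the sanctioned exception
      (void)argv; (void)hits; (void)offset; (void)n_items;
      return argc > 1;
  }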




I do research computing support, and in my circles bioinformatics has long been considered the nightmare area to support (though perhaps it has been supplanted by machine learning these days).

To pick up some of that: compiler maintainers obviously disagree about the compiler always being right (which one?), and I'm baffled by the requirement for a compiler to support OpenMP (which version?) to be considered C, especially if you're dealing with, say, embedded systems for bioinformatics data acquisition.

In high-profile projects, I've successfully claimed to support systems I'd never used (particularly when the architecture and operating system landscape was rather more interesting); I couldn't just tell the structural biology users there was no point in supporting their hardware. I currently support a GPU-centric POWER9 system, but I hadn't actually used POWER during the years I'd been building packages for it (and for ARM).


> To pick up some of that: compiler maintainers obviously disagree about the compiler always being right (which one?), and I'm baffled by the requirement for a compiler to support OpenMP (which version?) to be considered C, especially if you're dealing with, say, embedded systems for bioinformatics data acquisition.

As I said, I'm a language descriptivist. The compiler I'm using right now is right, because it builds the binary. If another compiler has a different opinion, then I may have to deal with multiple dialects. And because the default compilers in many HPC clusters and Linux distributions don't get updated that often, the dialects may continue to be relevant for years.

The situation with OpenMP is similar. A compiler does not have to support OpenMP to be considered a C compiler, but it does need it to be useful as one. The bioinformatics software I need to build often uses OpenMP; if a compiler can't build that software, it's not doing a very good job of being a compiler. OpenMP versions are rarely an issue, as the usage is usually very basic. The only version conflict I can remember is from the C++ side: some compilers didn't support parallel range-based for loops.
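For the record, the construct in question looks like this (a sketch; it needs a compiler with OpenMP 5.0 support, e.g. recent GCC or Clang, while older ones reject the pragma on this loop form):

  #include <vector>

  // OpenMP over a range-based for loop: legal since OpenMP 5.0,
  // but some compilers only accepted the index-based form.
  void normalize(std::vector<double>& v, double factor) {
      #pragma omp parallel for
      for (double& x : v)
          x *= factor;
  }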



