Hacker News

>Building and restarting the program took a minimum 15 seconds no matter how trivial the change. It became an absolute nightmare to work with and I eventually stopped contributing because it was so frustrating to work with.

Wait. You’re complaining about a 15 second compilation and startup loop?

Don’t take this the wrong way, but I don’t think compiled languages are for you.




> Don’t take this the wrong way, but I don’t think compiled languages are for you.

You just need an adequate build system that can perform incremental compilation and does not run a whole lot of unnecessary steps on every build no matter what changes.

Where I work, our system is huge and includes code written in multiple languages. A change to a "leaf module" (one that few other modules depend on) takes a second or two. You only get into tens of seconds if you change code that affects the external API of a "core" module (one that many other modules depend on), which triggers the re-build of all of them - in that case there's no escape, and many LoC must be re-compiled to verify they still work.

Keeping it this way is not easy: you must constantly fix mistakes people often make, like adding a new build step that runs unconditionally - everything should run conditionally (normally based on whether its inputs or outputs were touched) - and you must optimise things that take too long (e.g. speed up slow tests, and move code that changes often out of a core module into a leaf module).
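A minimal sketch of the "everything runs conditionally" rule, using make's timestamp checks (the file names and the generator script are hypothetical):

```make
# make re-runs a rule only when a prerequisite is newer than its target,
# so declaring inputs and outputs precisely is what keeps builds incremental.
app: main.o util.o
	cc main.o util.o -o app

%.o: %.c
	cc -c $< -o $@

# Anti-pattern: a .PHONY step with no declared inputs or outputs runs
# unconditionally on every build and defeats incrementality.
.PHONY: codegen
codegen:
	./generate.sh
```

The fix for the anti-pattern is to turn `codegen` into an ordinary rule whose target is the generated file and whose prerequisite is the generator's input, so make can skip it when nothing changed.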


That's very unhelpful.

I abhor long compile times, and I exclusively use statically typed, compiled/transpiled languages. The solution isn't to just shrug it off, but to seriously evaluate how difficult it would be to refactor into smaller modules and whether the benefits would be worth it. Sometimes it's not worth the hassle. But if velocity is becoming a concern and a potential major version is on the horizon, a refactor to reduce technical debt and make things modular can be a real possibility if explained correctly to the right stakeholders.


It’s not meant to be helpful. It’s simply the truth. They’re literally talking about 15 seconds.

I’m sorry, but if you’re looking for subsecond compile times, you’re simply not going to get it in C, C++, Java, or really any statically typed compiled language for any project that isn’t trivial - no matter how many dynamically linked libraries you break your project up into.

They want a REPL, and they’re just not going to get one while dealing with these technologies.

Even your idea of creating a million tiny libraries to achieve less than 15 second compilation and launch time is insane, because now you’ve just “solved” developer efficiency by creating a deployment and maintenance nightmare.

It’s not a serious solution.


> I’m sorry, but if you’re looking for subsecond compile times, you’re simply not going to get it in C, C++, Java,

I always get subsecond compile times when actively working in C, until headers are changed. Single files take milliseconds to compile, and you ordinarily aren't changing multiple files at a time.

A 2000 line module I was working with was compiling in about 20ms, with a clean performed beforehand.

My last project, around 10k sloc in about 6 modules compiled and linked in under 1s.

The exception to this is embedded environments (specifically the ESP32 IDF, and similar) whose build systems run CMake more often than required.


This is not true at all in any of those languages. For example, in C with a good build system setup, modifying just one .c file means only that file needs to be recompiled; then you only need to run the linker.

Even if the program is very large and linking everything takes many seconds, breaking up the program into dynamic libraries should get you sub-second compiles.



