
No doubt that straight-up ARC, with refcount bumps on every function call, etc., versus a good GC with the same allocation paths: a good GC will smoke it. No question.

But that isn't how I read what Lattner said here, and I've been thinking similar things. Essentially, with ARC the goal is to cheat (to be fair, with tracing GC it is as well, just in different ways). And you cheat by implementing an ownership/borrow-checker-style type system behind the scenes to get rid of the need for RC on most of your objects. Once you do that, you have the GC competing with stack allocation plus some malloc/free scattered in. It becomes a much more interesting fight.




"No doubt that straight-up ARC, with refcount bumps on every function call, etc..."

Yeah, but in Swift you don't do that. From what I'm told, the compiler's static analysis has gotten a lot smarter about determining when a refcount bump is needed, and the increased use of struct-based value types in the language means more and more situations where a refcount bump isn't needed at all, since there's no object reference in the first place.


This “cheating” has been going on in GC’d languages for decades. Java, JavaScript, and probably other language implementations have what they call “escape analysis”.
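To make the escape-analysis point concrete, here's a minimal Java sketch (the `Point` class and `manhattan` method are hypothetical names of mine, not from any comment above): the object never leaves its method, so HotSpot's JIT may scalar-replace it and skip the heap allocation entirely, leaving nothing for the GC to track. Whether that actually happens depends on the JIT and flags like `-XX:+DoEscapeAnalysis`; the code just shows the shape of allocation a GC'd runtime can optimize away.

```java
public class EscapeDemo {
    // Small value-like class; in Swift this would likely be a struct.
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // The Point never escapes this method (not returned, not stored,
    // not passed anywhere), so escape analysis may replace it with
    // two plain ints in registers: no heap allocation, no GC work.
    static int manhattan(int x, int y) {
        Point p = new Point(x, y);
        return Math.abs(p.x) + Math.abs(p.y);
    }

    public static void main(String[] args) {
        System.out.println(manhattan(3, -4)); // prints 7
    }
}
```

The semantics are identical either way; the optimization only changes where (or whether) the allocation happens, which is why it's invisible to the programmer.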


Note that GCs are starting to cheat too. Julia (for example) is in the process of implementing compile-time escape analysis, which will be used to (among other things) remove allocations as long as the compiler can prove you won't notice.


Java, Go and JavaScript implementations do escape analysis already.


Cool! (I gave the example of Julia not because it's the best GC implementation (it's not), but because it's the implementation I know.)



