Hacker News

If I understand correctly (and I'm not sure I do), Neut achieves its memory management by not sharing data between structures but copying it instead. This works well when all data structures are immutable.

However, I feel like it would be more performant to just use reference counting here. After all, incrementing a counter must be faster than a memcpy, no? Since immutable values can't create cycles, no memory will be leaked.




I haven’t done a deep dive into the implementation, but based on the theory employed, particularly the linear nature of CBPV’s computational types, the copying would most likely be elided in all cases except for when a programmer writes a function which explicitly copies data to a new term.


I can't believe my good fortune to have a wonderful reader like you, by the way.


> Since immutable values can't create cycles, no memory will be leaked.

This is not generally true. In a lazy language, you can certainly say:

    main = mdo
      y <- f x
      x <- g y
      return y
The requirement is simply that you don't inspect the value of x until later: f makes something, y, to use later, and when you use y, it inspects x. x and y now have references to each other.
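For reference, here is a complete version of that example with hypothetical stand-ins for f and g (any definitions that only use their argument lazily would do):

```haskell
{-# LANGUAGE RecursiveDo #-}
module Main where

-- Hypothetical f and g: each conses onto the other binding
-- without forcing it, so the recursive knot can be tied.
f :: [Int] -> IO [Int]
f x = return (0 : x)

g :: [Int] -> IO [Int]
g y = return (1 : y)

main :: IO ()
main = mdo
  y <- f x    -- y = 0 : x, but x isn't inspected yet
  x <- g y    -- x = 1 : y; now x and y reference each other
  print (take 6 y)
```

mdo desugars to mfix (fixIO in the IO monad), which is what lets x appear before its binding; running this prints [0,1,0,1,0,1].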


> In a lazy language, you can certainly say:

Not in Haskell. Which language do you specifically have in mind?


Yes indeed in Haskell. You have to enable RecursiveDo for that particular example to work. I'm not sure why ferzul chose a recursive monadic computation when

    let { x = 0:y; y = 0:x } in x
seems to demonstrate the same thing.
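That let version needs no extensions at all; wrapped in a runnable module, it shows the mutual references that a pure reference count could never reclaim:

```haskell
module Main where

-- x and y each hold a reference to the other, so neither
-- reference count could ever drop to zero; GHC's tracing GC
-- has no such problem with the cycle.
cyclic :: [Int]
cyclic = let { x = 0 : y; y = 0 : x } in x

main :: IO ()
main = print (take 6 cyclic)  -- prints [0,0,0,0,0,0]
```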




