If I understand correctly (and I'm not sure I do), neut achieves its memory management by not sharing data between structures, and instead copying it. This works well when all data structures are immutable.
However, I feel like it would be more performant to just use reference counting here. After all, incrementing a counter must be faster than a memcpy, no? Since immutable values can't create cycles, no memory will be leaked.
I haven’t done a deep dive into the implementation, but based on the theory employed, particularly the linear nature of CBPV’s computational types, the copying would most likely be elided in all cases except for when a programmer writes a function which explicitly copies data to a new term.
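I haven't looked at neut's internals either, but GHC's LinearTypes extension gives a rough feel for the discipline being described (this is only an analogy, not neut's actual mechanism): a linear function promises to consume its argument exactly once, which is what lets a compiler reuse the value in place instead of copying it.

```haskell
{-# LANGUAGE LinearTypes #-}

-- Hypothetical illustration: the %1 arrow says the pair is used
-- exactly once, so nothing here ever needs to be duplicated.
swap :: (Int, Int) %1 -> (Int, Int)
swap (x, y) = (y, x)

main :: IO ()
main = print (swap (1, 2))
```

A copy would only be forced if you wrote a function that used its argument twice, which matches the "explicit copy" case mentioned above.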
> Since immutable values can't create cycles, no memory will be leaked.
this is not generally true. in a lazy language, you can certainly say:
    main = mdo
      y <- f x
      x <- g y
      return y
the requirement is simply that you don't inspect the value of x until later (f makes something, y, to use later; when you use y, it inspects x). x and y now have references to each other.
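For completeness, here's a self-contained version of that sketch that actually compiles with GHC's RecursiveDo extension (f and g are made up here: each conses onto the other's result without forcing it). Forcing y afterwards walks the cycle:

```haskell
{-# LANGUAGE RecursiveDo #-}

-- f builds y from x lazily: it refers to x without inspecting it.
f :: [Int] -> IO [Int]
f x = return (1 : x)

-- g builds x from y the same way, closing the loop.
g :: [Int] -> IO [Int]
g y = return (2 : y)

main :: IO ()
main = mdo
  y <- f x
  x <- g y
  print (take 5 y)  -- y = 1 : x, x = 2 : y, so this prints [1,2,1,2,1]
```

The two cons cells now point at each other on the heap, which is exactly the cycle that naive reference counting would leak.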
Yes indeed in Haskell. You have to enable RecursiveDo for that particular example to work. I'm not sure why ferzul chose a recursive monadic computation when