I'd really welcome links to better measurements and graphs. Please give me hard data, properly presented; don't make claims without citations. I really want to learn more.
Correct me if I'm wrong, but even generational GCs present their own problems: the cost of GC increases not only when there is too little memory but also when you try to use very large heaps (more than, say, 6-8 GB, which server applications can need). As far as I know, only Azul's proprietary GC is claimed to avoid most of the problems typical of practically every other known GC. fmstephe, in his comment here, linked to a discussion in which the author of Azul's GC participated. But nowhere have I read a claim that any GC doesn't need significantly more RAM than manual management.
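For intuition about why the headroom matters so much, here's a minimal sketch in Go (my own back-of-envelope illustration, not taken from the paper or from any benchmark in this thread): with a fixed live set, the number of collections needed to churn through the same amount of garbage is roughly the garbage divided by the headroom, so shrinking the allowed headroom multiplies the GC work.

```go
// Minimal sketch (mine, not from the paper): with a fixed live set, the
// number of collections needed to allocate the same amount of garbage is
// roughly garbage / headroom, where headroom is the extra heap the
// collector is allowed beyond the live data. GOGC is Go's knob for that.
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

var live [][]byte // ~256 MiB of data that stays reachable the whole time
var sink []byte   // keeps the short-lived allocations on the heap

func churn() {
	// Allocate and immediately drop ~1 GiB of short-lived garbage.
	for i := 0; i < 1<<14; i++ {
		sink = make([]byte, 1<<16)
	}
}

func gcCycles(gcPercent int) uint32 {
	debug.SetGCPercent(gcPercent) // 100 = heap may grow to ~2x live, 10 = ~1.1x live
	runtime.GC()                  // start each measurement from a clean slate
	var before, after runtime.MemStats
	runtime.ReadMemStats(&before)
	churn()
	runtime.ReadMemStats(&after)
	return after.NumGC - before.NumGC
}

func main() {
	// Build the live set that every collection has to trace.
	for i := 0; i < 256; i++ {
		live = append(live, make([]byte, 1<<20))
	}
	fmt.Println("collections with ~2.0x live set:", gcCycles(100))
	fmt.Println("collections with ~1.1x live set:", gcCycles(10))
}
```

If my understanding of GOGC is right, the second run should need roughly an order of magnitude more collections than the first for the same amount of useful work; the exact numbers will of course depend on the runtime version.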
Please be clear. Do you claim it's misleading to say that GCs need at least twice as much RAM to be performant? If so, on what basis do you claim that the graph I linked doesn't support that? Can you give an example of a system that does better, with measurements?
> Do you claim it's misleading to say that GCs need at least twice as much RAM to be performant?
This is not what you wrote. You said that "most GCs start being really slow even with 3 times more memory than needed with the manual management", while the generational mark-sweep collector has essentially zero overhead with 3x RAM in that benchmark. The "most GCs" you're referring to are algorithms that are decades behind the state of the art.
Also, "really slow" is a fuzzy term and I am not sure how you come to that conclusion from the image.
Remember, they're being compared to an oracular allocator that has perfect knowledge of lifetime/reachability without actually having to compute it. That ideal situation rarely obtains in the real world. The paper uses it as a baseline for quantitative comparison (much as speeds are sometimes expressed as a fraction of c), not because it represents a realistic implementation.
You answered nothing of what I asked. I asked for links, measurements, and graphs.
Your only arguments: after pointing out that I wrote "most need even 3 times more", you give an example of one that needs 2 times more. Then you complain that "really slow" is fuzzy. Then you claim that the "ideal situation rarely obtains in the real world."
My point is that you don't understand your own source. The "links, measurements, graphs" are in the paper you referenced; they just say something different from what you believe they say.
If you're struggling with understanding the paper, there's really nothing more I can do to help.
Apart from the claim that I use "fuzzy" words, or that my set of "most GCs" unsurprisingly doesn't include the kind that Go still doesn't have and probably won't have for some years to come, what have I written that you have actually refuted?