Hacker News

But what if I don't have a heap? Not even a wrappable heap.

I could be an OS bootstrapping layer, a signal handler, an ISR, a process-control project operating under strict "No dynamic allocation!" rules, a thunking layer to reach legacy code modes (BIOS says hi!), ...[1]

You're imagining a world where everything is Node or Python or Java, or at the worst C on top of the well-defined standard library. And I'm telling you that the world is bigger than that.

And more specifically, that those weird layers sometimes need library code too.

[1] (Edited to add) A malware payload, a tracing layer, a compiler-generated stub, a benchmarking hook that can't handle heap latency, ...




> "You're imagining a world where everything is Node or Python or Java, or at the worst C on top of the well-defined standard library. And I'm telling you that the world is bigger than that."

Why do you keep putting words in my mouth?

> "But what if I don't have a heap? Not even a wrappable heap."

I'm forced to repeat myself. At no point does my proposed API force you to rely on a heap. On the contrary, it lets you use whatever solution works best for you, in your specific case.

In your custom kernel project, your custom allocator() can return a buffer from a memory pool you handle yourself. Your custom deallocator() will reclaim that memory back into your custom memory pool.

In a different project, say a desktop app for Windows 10, the allocator() will simply call malloc(), and the deallocator() will call free().

This way, your allocator() can do whatever. Your deallocator() can do whatever. How is this restrictive in any way, shape, or form?


> In your custom kernel project, your custom allocator() can return a buffer from a memory pool you handle yourself. Your custom deallocator() will reclaim that memory back into your custom memory pool.

I don't have either. I have a statically allocated buffer big enough for one frame of data, and I need to guarantee that it never gets used twice. My code does not have a custom allocator. It does not allocate.


  static unsigned char g_your_buffer[4096];  // your statically reserved pool; size is up to you
  static size_t g_position = 0;              // bump cursor into the pool

  void * your_custom_allocator(size_t size)
  {
      // Handle your locks.
      // Sanity checks, assertions, bounds checks, etc...

      void * result = g_your_buffer + g_position;

      // "Commit Memory" from your buffer.
      g_position += size;

      // Some more code...

      return result;
  }
Now, you can happily use:

  BASE64DECODE_DecodeEx(..., your_custom_allocator, your_custom_deallocator);
What else do you need? You seem very disturbed by the use of the word "allocator" here, feel free to rename to whatever works for you.
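For what it's worth, the matching your_custom_deallocator for a bump-style pool like the sketch above can be trivial. This is only a sketch under that assumption; all names are illustrative, and g_position is the same bump cursor the allocator advances (redeclared here so the snippet stands alone):

```c
#include <stddef.h>

/* Bump cursor shared with the allocator (redeclared for this sketch). */
static size_t g_position = 0;

void your_custom_deallocator(void *ptr)
{
    (void)ptr;  /* bump pools don't free individual blocks */
}

void your_pool_reset(void)
{
    g_position = 0;  /* the whole pool is reclaimed at once, in O(1) */
}
```

The design choice here is that per-block frees are no-ops; the caller reclaims everything in one go when the frame of work is finished.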


> What else do you need?

A guarantee that this "allocator function" is only ever called once.


Are you seriously suggesting this is an issue? That's entirely up to you to solve in your custom allocator.

Use whatever mechanism is available to you. Use a global atomic counter: check it every time you enter your custom allocator, increment it after a successful allocation. I don't know your system's constraints, nor should I...


I'm not talking about concurrency. I'm talking about needing to know exactly how many bytes are being allocated ahead of time, because I've got 192 KB of RAM, and 112 KB of them are spoken for by I/O buffers.

If I pass in an allocator that returns the statically allocated buffer, then the second call to it must abort loudly.
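A one-shot allocator over such a static buffer might look like the following. This is a sketch only; the buffer size and all names are hypothetical, and the point under debate is that the library's API gives no contract about how many times it will call this:

```c
#include <stdio.h>
#include <stdlib.h>

enum { FRAME_SIZE = 4096 };            /* hypothetical frame size */
static unsigned char g_frame[FRAME_SIZE];
static int g_handed_out = 0;

/* Hands out the static frame buffer exactly once.  A second call is a
 * bug, so it aborts loudly rather than pretending memory exists. */
void *one_shot_alloc(size_t size)
{
    if (g_handed_out || size > FRAME_SIZE) {
        fprintf(stderr, "one_shot_alloc: repeated or oversized request\n");
        abort();
    }
    g_handed_out = 1;
    return g_frame;
}

void one_shot_free(void *p)
{
    /* Nothing to reclaim: the buffer is static.  Resetting the flag
     * here would defeat the single-use guarantee, so we don't. */
    (void)p;
}
```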


> In your custom kernel project, your custom allocator() can return a buffer from a memory pool you handle yourself. Your custom deallocator() will reclaim that memory back into your custom memory pool.

You realize you're arguing that a custom, probably buggy, heap implementation isn't a heap, right?


> In your custom kernel project, your custom allocator() can return a buffer from a memory pool you handle yourself.

We're done. "It's OK, you can just write your own heap-like API!" is just not remotely responsive to the kind of problems I'm talking about, and the fact that you think it is sorely tempts me to put more words in your mouth.

If you don't think these libraries are useful, that's fine. Don't use them. Don't presume to understand the application realm before you've worked in it.


I also work in the resource-constrained / embedded native space and have had to work within the kinds of constraints you're describing. I think you're severely misunderstanding what the comment you're responding to is proposing.


Then you'll have to point me to a real-world example of an API that works like that, because this is balderdash (seriously? Implementing malloc and free on top of a stack-based memory pool just to decode a buffer?) to my eyes.


But what's the alternative to passing custom allocators and deallocators if you want to tightly control the way a library manages memory? If you're running with such constraints, presumably you want to be in control of memory management and not just leaving the library to do its own thing.
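One common alternative shape is the caller-owned-buffer API: the library writes into a buffer the caller supplies and reports the size it needs, so it never allocates at all (this is the shape of APIs like mbedtls_base64_decode). A minimal sketch of the pattern, using a trivial hex decoder as a stand-in since a full base64 decoder would be long; every name here is hypothetical:

```c
#include <stddef.h>

static int hexval(char c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

/* Decodes src (hex) into the caller-owned dst.  Returns 0 on success,
 * -1 on bad input or an undersized dst.  *out_len is always set to the
 * space needed, so the caller can size-query first with dst == NULL. */
int hex_decode(unsigned char *dst, size_t dst_size, size_t *out_len,
               const char *src, size_t src_len)
{
    size_t need = src_len / 2;
    *out_len = need;
    if (src_len % 2) return -1;
    if (dst == NULL) return 0;            /* size query only */
    if (dst_size < need) return -1;
    for (size_t i = 0; i < need; i++) {
        int hi = hexval(src[2 * i]);
        int lo = hexval(src[2 * i + 1]);
        if (hi < 0 || lo < 0) return -1;
        dst[i] = (unsigned char)((hi << 4) | lo);
    }
    return 0;
}
```

With this shape the 192 KB-of-RAM caller above can pass its statically reserved buffer directly, and the "allocator called twice" question never arises.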



