A Story Of realloc (And Laziness) (httrack.com)
130 points by janerik on April 6, 2014 | 63 comments



Code in the article for realloc is dangerous and wrong:

  void *realloc(void *ptr, size_t size) {
    void *nptr = malloc(size);
    if (nptr == NULL) {
      free(ptr);
      return NULL;
    }
    memcpy(nptr, ptr, size); // KABOOM
    free(ptr);
    return nptr;
  }
Line marked KABOOM copies $DEST_BYTE_COUNT, rather than $SOURCE_BYTE_COUNT.

Say you want to realloc a 1 byte buffer to a 4 byte buffer - you just copied 4 bytes from a 1 byte buffer which means you're reading 3 bytes from 0xDEADBEEF/0xBADF000D/segfault land.

EDIT: Also, this is why the ENTIRE PREMISE of implementing your own reallocator spec'd to just the realloc prototype doesn't make much sense. You simply don't know the size of the original data with just a C heap pointer, as this is not standardized AFAIK.
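For contrast, a hypothetical variant that does take the old size as a parameter (which the standard realloc prototype does not provide) can copy safely; this sketch and its name are made up for illustration:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical realloc variant: the caller supplies the old size, so the
 * copy length can be clamped to min(old_size, new_size). */
void *realloc_sized(void *ptr, size_t old_size, size_t new_size) {
    void *nptr = malloc(new_size);
    if (nptr == NULL)
        return NULL;              /* leave the original block untouched */
    if (ptr != NULL) {
        memcpy(nptr, ptr, old_size < new_size ? old_size : new_size);
        free(ptr);
    }
    return nptr;
}
```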


"Also, this is why the ENTIRE PREMISE of implementing your own reallocator spec'd to just the realloc prototype doesn't make much sense."

If you're reimplementing realloc() it's pretty easy to know the size of the allocated regions - you just need to store the size somewhere when you allocate a block. One common method is to allocate N extra bytes of memory whenever you do malloc() to hold the block header and return a pointer to (block_address + N) to the user. When you then want to realloc() a block, just look in the block header (N bytes before the user's pointer) for the size.
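A minimal sketch of that scheme, layered on the system malloc for brevity (all names here are made up; a real allocator would carve memory out of sbrk/mmap itself):

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* "Block header" trick: store the size just before the user pointer.
 * A union with max_align_t keeps the user pointer suitably aligned. */
typedef union {
    size_t size;
    max_align_t align;
} block_header;

void *my_malloc(size_t size) {
    block_header *h = malloc(sizeof *h + size);
    if (h == NULL)
        return NULL;
    h->size = size;        /* remember the size N bytes before the user pointer */
    return h + 1;
}

void my_free(void *ptr) {
    if (ptr != NULL)
        free((block_header *)ptr - 1);
}

void *my_realloc(void *ptr, size_t size) {
    if (ptr == NULL)
        return my_malloc(size);
    block_header *h = (block_header *)ptr - 1;
    size_t old_size = h->size;              /* recovered from the header */
    void *nptr = my_malloc(size);
    if (nptr == NULL)
        return NULL;                        /* original block left intact */
    memcpy(nptr, ptr, old_size < size ? old_size : size);
    my_free(ptr);
    return nptr;
}
```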

The block header can store other useful stuff, like debugging information. I once implemented a memory manager for debugging that could generate a list of all leaked blocks at the end of the program with the file names and line numbers where they were allocated.


That would require either replacing malloc as well, or programming to the hairy details of your system's libc (i.e., knowing how and where it lays out the buffer metadata). The point is not that either is impossible, but that you can't replace realloc without doing one or the other.


Yes, indeed - but the code was not meant to be an actual implementation, just a (bad) minimalistic example.


Not to mention that this realloc incorrectly frees the input ptr on failure. The standard says

    "If the space cannot be allocated, the object [*ptr] shall remain unchanged."


In the context of the code presented all mallocs were returned by sbrk. Even if the previous malloc had been "allocated" with size zero, the sbrk called by this function's invocation of malloc will ensure that there are enough bytes of addressable memory to ensure that, although it will be copying data out of earlier allocations, it will not segfault. realloc makes no guarantees of zero-initializing a newly allocated extent.


I have also found people often underestimate realloc (but I have never done the same level of investigation to find out just how clever it is!)

On several occasions I have wanted to use mmap, mremap, and friends more often to do fancy things like copy-on-write memory. However, I always find this whole area depressingly poorly documented, and hard to do (because if you mess up a copy-on-write call, it just turns into a copy it seems, with the same result but less performance).

While it's good that realloc is clever, I find it increasingly embarrassing how badly C (and even worse, C++, which doesn't even really have realloc, as most C++ types can't be bitwise moved) memory allocation maps to what operating systems efficiently support.


C++ really wants a realloc variant that extends an allocation if it can be extended without a copy, and leaves the allocation unchanged if it can't. The annoying thing is that there's no good reason why this can't exist beyond that the STL allocator interface happens not to have it.


jemalloc's non-standard interface gives you some of what you want, especially xallocx() http://www.canonware.com/download/jemalloc/jemalloc-latest/d...

There have been C++ templates written that use jemalloc-specific calls; for instance see Folly from facebook. I haven't taken a close look, but I know they do some jemalloc-specific code: https://github.com/facebook/folly/tree/master/folly

The other allocation-related thing that C++ really wants (and that could benefit C as well) is "sized deallocation". You often know the exact size of the memory you allocated. If you could pass that to free(), the allocator could save some work determining it. In the case of C++ the compiler can often do this on your behalf for many "delete" calls (at least in the cases where it knows the exact type). Google did an implementation of this idea and got good results. They submitted a proposal to the standards body but I don't know if there is any recent activity. I hope it does happen though: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n353...


Folly does take advantage of jemalloc to expand allocations in-place when possible, but afaik it doesn't do the more extreme optimization mentioned in the article where pages are moved to a different virtual address without actually paging into memory.

Sized deallocation made it into C++14.


Since 2.1, jemalloc does support using mremap to do large realloc()'s, although it seems to be off by default. You need "./configure --enable-mremap" to get it.

That's good news about sized deallocation, I hadn't noticed that there is an updated "N3778" proposal which apparently was accepted. I still haven't seen the dlmalloc work to support that show up in the main svn branch.


Are you saying that vector<char> with 'grow()' is substantially slower than the given C macro? By how much?


It always calls (the equivalent of) malloc + memcpy + free for each grow, so it can be anywhere from the exact same speed (when realloc does the same thing internally) to absurdly slower (in the given case of a large array that has to be paged in). The first case is by far the most common case, but it is something that sometimes matters.


It seems that even vector::shrink_to_fit is implemented that way.

That is, when you call shrink_to_fit, it first copies all elements it has, swaps itself with the new copy, and deletes the original vector. (Probably because there's no portable way of returning only part of a contiguous memory region. Yikes.)

http://stackoverflow.com/questions/2695552/how-to-shrink-to-...


I think the reason is the existence of move/copy constructors, i.e. it's invalid to just move a C++ object in memory (as a realloc would do), in general.

(shrink_to_fit does need to (attempt to) get a smaller allocation, otherwise it wouldn't be shrinking the vector.)


Huh, is it really implemented like that?!

Why use realloc when growing an array you have an interface to? Just add another allocated buffer to the previous allocated areas. When there are too many small areas, consolidate with realloc/free.

Much faster (yes yes, almost always).

(Disclaimer: Last time I used C++ I had hair. :-) )

Edit: OK, thanks plorkyeran.


> Huh, is it really implemented like that?!

Well, there is no grow() method on std::vector, so ... no?

But generally speaking, std::vector implementations are basically required to[1] grow the backing store exponentially so that adding elements to them absolutely does not call malloc on every growth of the vector.

You can make a vector do this kind of pessimistic allocation by calling reserve() for every element you add, which will cause the allocation of exactly the amount you reserved. This would be dumb, though. Reserve is there so you can allocate a precise large number and avoid even the logarithmic cost of allocation in adding elements to the vector.

It's really worth noting that this is a better worst case than the worst case for realloc(), which is entirely entitled to reallocate and copy every single time you call it. You're pretty likely to implement the exact same algorithm as vector if you DIY because of this exact issue when performance is important.

I do agree, though, that it would be nice if there was a failable realloc in C++ (and C for that matter) as described above, where it simply returns NULL if there's no more room in the allocated space. What to do in that event should really be up to the caller, not a black box algorithm sensitive to all sorts of variables.

[1] Because push_back() has a requirement of having amortized constant complexity, which means that it would be non-conforming to have the entire array moved for every push. http://www.cplusplus.com/reference/vector/vector/push_back/

[N] However, std::basic_string allows linear complexity on its push_back, so that may be what the poster meant. I'm not aware of any widely used implementation that actually does it in worse than amortized constant, though.


I was talking of the standard way of allocating extra buffers, to keep new data. With a layer on top of this, using a series of pointers (8-16) to the underlying memory areas.

(When you fill up the sub-buffers, you do a realloc to all of them. Future sub-buffers get the new size.)

This needs less copying and (potentially) free calls. On the other hand: An access would need some extra operation to find the right buffer and get an offset into it.

Edit: Ok, thanks. I explained in case I was unclear, since you went off on a tangent. It was informative, so it is all good. (I'm not going back to C++ et al. anyway. :-) )


I know. I wasn't really replying to that, though.

The standard library actually does have a built in container that's (almost) exactly as you describe, though. It's called std::deque (http://www.cplusplus.com/reference/deque/deque/).


std::vector guarantees that it stores its elements contiguously, as this is required for a lot of use-cases (such as passing the buffer to one of the millions of functions that take just a pointer and a size).


What you're talking about is some sort of rope data structure which is pretty simple to implement on top of std::vector and std::list. The std::vector guarantees contiguous memory locations so you can't do it for that particular container.


This bothers me so much:

    buffer = realloc(buffer, capa);
Yeah, 'cause when it fails we didn't need the old buffer anyway... Might as well leak it.


Serious question from a guy made soft by garbage collection: how frequent is memory allocation failure nowadays, with large memories and virtual memory? Were I to guess from my state of ignorance I'd think that if allocs began to fail, there was no recovery anyhow... so leaking in this case would be one leak right before a forced quit.

Wrong? Are there lots of ways allocation can fail besides low memory conditions?


> how frequent is memory allocation failure nowadays

I'd guess that it varies a lot by domain and project but from what I've seen, pretty common.

> I'd think that if allocs began to fail, there was no recovery anyhow

I think this is what both high-level languages and the Linux "over-commit-by-default" policy have convinced people is the normal behavior. However in my experience it's not that hard to make OOM simply bubble up the stack and have all the callers up the stack free their resources, then let the rest of the program keep running. It doesn't have to be a catastrophic event. You just have to be consistent about handling it, and write code expecting it.

> Are there lots of ways allocation can fail besides low memory conditions?

To think of a few, there's running out of memory, but there's also running out of address space. The latter is not so hard to accomplish on a 32-bit system. You could ask for a chunk of memory where, if you could coalesce all the free space throughout the heap, you may have enough space, but you can't make it into a contiguous allocation.

On Windows I've also seen the kernel run out of nonpaged pool, which is more space constrained than the rest of memory. I've seen this when a lot of I/O is going on. You get things like WriteFile failing with ERROR_NOT_ENOUGH_MEMORY.


On Linux, somewhat infamously, malloc never fails. It will always return a pointer to some fresh part of the address space. It is able to do this because, in turn, sbrk/anonymous mmap never fails - it always allocates some fresh address space. It is able to do this because Linux does not allocate physical memory (or swap) when it assigns address space, but when that address space is actually used. It will happily allocate more address space than it has memory for - a practice known as 'overcommit'. So, on Linux, you can indeed not worry about malloc failing:

http://www.scvalex.net/posts/6/ http://www.drdobbs.com/embedded-systems/malloc-madness/23160...

There are a few caveats to this.

Firstly, malloc actually can fail, not because it runs out of memory, but because it runs out of address space. If you have 2^64 bytes of memory in your address space already (2^48 on most practical machines, I believe), then there is no value malloc could return that would satisfy you.

Secondly, this behaviour is configurable. An administrator could configure a Linux system not to do this, and instead to only allocate address space that can be backed with memory. And actually, some things I have read suggest that overcommit is not unlimited to begin with; the kernel will only allocate address space equal to some multiple of the memory it has.

Thirdly, failure is conserved. While malloc can't fail, something else can. Linux's behaviour is essentially fractional reserve banking with address space, and that means that the allocator will sometimes write cheques the page tables can't cash. If it does, if it allocates more address space than it can supply, and if every process attempts to use all the address space that it has been allocated, we have the equivalent of a run on the bank, and there is going to be a failure. The way the failure manifests is through the action of the out-of-memory killer, which picks one process on the system, kills it, and so reclaims the memory allocated to it for distribution to surviving processes:

http://linux-mm.org/OOM_Killer

The OOM killer is a widely-feared bogeyman amongst Linux sysadmins. It sometimes manages to choose exactly the wrong thing as a victim. At one time, and perhaps still, it had a particular grudge against PostgreSQL:

http://thoughts.davisjeff.com/2009/11/29/linux-oom-killer/

And in the last month or so, on systems where I work, I have seen a situation where a Puppet run on an application server provoked the OOM killer into killing the application, and another where a screwed up attempt to create a swap file on an infrastructure server provoked it into killing the SSH daemon and BIND.

I don't know about what other operating systems do. Apparently all modern Unixes overcommit address space in much the same way as Linux. However, I can't believe that FreeBSD handles this as crassly as Linux does.


  I don't know about what other operating systems do.
  Apparently all modern unixes overcommit address space in
  much the same way as Linux. However, i can't believe that
  FreeBSD handles this as crassly as Linux does.
Solaris does not (generally, unless you use MAP_NORESERVE w/ mmap).

In general, the Linux kernel's default OOM behaviour is undesirable for the vast majority of enterprise use cases. That's why RedHat and many other vendors used to disable it by default (unknown who still does).

Why is it bad? Simple; imagine your giant database is running with a large address space mapped. Another program decides to allocate a large amount of memory. The Linux kernel sees the tasty database target, kills it, and gives the smaller program its memory. Congratulations, your database just went poof.

There's an article that discusses the advantages/disadvantages with respect to Solaris here:

http://www.oracle.com/technetwork/server-storage/solaris10/s...

The article (while old) still applies today.


> On Linux, somewhat infamously, malloc never fails. It will always return a pointer to some fresh part of the address space. It is able to do this because, in turn, sbrk/anonymous mmap never fails - it always allocates some fresh address space. It is able to do this because Linux does not allocate physical memory (or swap) when it assigns address space, but when that address space is actually used. It will happily allocate more address space than it has memory for - a practice known as 'overcommit'. So, on Linux, you can indeed not worry about malloc failing

True. However, you can disable this behavior if you like by running 'sysctl vm.overcommit_memory=2'; see proc(5).


> On Linux, somewhat infamously, malloc never fails.

Pretty close to true but I think that is a bit of a simplification. I seem to recall for instance on 32-bit Linux it's not hard to get malloc to return NULL: ask for some absurd size, like maybe a few allocations of a gigabyte or two, something that fits in a size_t but a 32-bit address space could not possibly accommodate with all the other things in the address space (stacks, your binary, libraries, kernel-only addresses in the page table, etc).
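That failure mode is easy to demonstrate even with overcommit enabled, because no address space could ever satisfy a SIZE_MAX-byte request; a sketch (function name made up):

```c
#include <stdint.h>
#include <stdlib.h>

/* Returns 1 if an absurdly large allocation fails (the expected case):
 * a SIZE_MAX-byte request exceeds any possible address space, so the
 * allocator returns NULL before overcommit even enters the picture. */
int huge_alloc_fails(void) {
    void *p = malloc(SIZE_MAX);
    if (p != NULL) {
        free(p);
        return 0;
    }
    return 1;
}
```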


oom killer always seems to target sshd in my encounters. Handy when you're at home.


Most Linux distributions run using an optimistic memory allocation system, whereby memory (RAM plus swap space) can be over-allocated. On these systems, your program can die due to lack of memory at any point in time. That is, even if you test the return values of every malloc() call, you still won't be safe.


I did not believe you, but then I did "man malloc" and sure enough, in the NOTES section at the bottom.

>>By default, Linux follows an optimistic memory allocation strategy. This means that when malloc() returns non-NULL there is no guarantee that the memory really is available.

So it's like airlines overbooking seats; the system just hopes that the memory is available when you actually try to use it. I had no idea. That would be an extremely annoying bug to try and track down. How would one even do it? Is there a way to test if you truly have the memory without segfaulting?


If the memory isn't available, it doesn't segfault; it blocks until memory is available, thanks to either swap or the OOM killer doing its thing.

If you want to ensure that you don't block, you can use mlock().


That's not quite true. The problem is more insidious than that.

Think of the memory requirements of the fork() system call. It clones the current process, making an entire copy of it. Sure, there's lots of copy-on-write optimisation going on, but if you want guaranteed, confirmed memory, you need to have a backing store for that new process. The child process has every right to adjust all of its process memory.

So if a 4GB process calls fork(), you will suddenly need to reserve 4GB of RAM or new swap space for it. Or if you can't allocate that, you will have to make the fork() fail.

This can be terrible for users, since most often a process is going to fork() and then exec() a very small program. And it seems nonsensical for fork() to fail with ENOMEM when it appears that there is lots of free memory left. But to ensure memory is strictly handled, that's what you have to do.

The alternative, which most distributions use, is to optimistically allocate memory. Let the fork() and other calls succeed. But you run the risk of the child process crashing at any point in the future when it touches its own 'confirmed' memory and the OS being unable to allocate space for it. So the memory failures are not discovered at system call points. There's no return code to spot the out of memory condition. The OS can't even freeze up until memory is available because there's no guarantee that memory will become available.


Well that's even more interesting! So you can have a program appear to be stuck and not know why! At least now I know I can use mlock() everywhere to determine if it locked on a write to promised-but-not-yet-available memory.


No. All you can do is try writing to it and see what happens (you can catch the segfault though).

http://opsmonkey.blogspot.co.uk/2007/01/linux-memory-overcom... has some more info.


Failure or not, this is the highway to shitty software with bad user experience (except for very special cases where it makes sense).

For me the funniest part has been that the people who seem entitled to write sloppy software are the exact same set who would have the shrillest voices complaining that Firefox is so slow and bloated (although it's not anymore).

Many believe that it's OK to hog memory, that it is an infinite resource. Many believe it is OK to be slow as long as it meets specs. Many believe your user application is the only application that the user will be running at any point in time. However, when your competition does it leaner and faster, you (not you personally; generic software) are mostly going to be toast.


Many people believe that over-engineering is bad. It doesn't mean that it's OK to have crappy software, but that you should focus on things that matter. In other words, it's OK to do X until it's not.

On the other hand, that's not an excuse for not understanding how things work and just having faith in some magic layer that somehow handles things for you. Doing so could make it impossible to improve those parts of the system that are important without rewriting everything.


Memory allocation failures are virtually non-existent in modern desktop computers. Good practice is to not test return values from malloc, new, etc.

Memory can be allocated beyond RAM size, so by the time a failure occurs your program really should crash and return its resources.

Embedded systems have fewer resources and some will not have virtual memory and so the situation will be different. But unless you know better, the best practice is still to not check the return from allocators. Running out of memory in a program intended for an embedded platform should be considered a bug.


I respectfully disagree with this. Ignoring return values is _not_ good practice. It is a slippery slope to bad software. By catching these memory errors, a program has the chance to properly teardown and report a message to the user instead of crashing.


Ugh. I would much rather know that the process died because of allocation failure than try to figure out why some code is trying to write to a random null pointer as these are two very different types of bugs.


I'm having a hard time picturing a situation where it would be tough to figure out. Typically you allocate memory to use it right after. errno will be set too.

Of course, there is no reason to not do all your allocation through a wrapper function which does check and abort on failure. I think the point was that surviving malloc failures is a dubious approach - instead go all in, or if it's a long-running service, provide a configurable max memory cap and assume that much will be available.
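The "go all in" approach is usually spelled as an xmalloc-style wrapper; this sketch is modeled loosely on that convention, not copied from any particular project:

```c
#include <stdio.h>
#include <stdlib.h>

/* Abort-on-failure wrapper: callers never see NULL, so every
 * allocation site stays a one-liner with no error path. */
void *xmalloc(size_t size) {
    void *ptr = malloc(size);
    if (ptr == NULL && size != 0) {
        fprintf(stderr, "fatal: out of memory, tried to allocate %zu bytes\n",
                size);
        exit(EXIT_FAILURE);
    }
    return ptr;
}
```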


In one scenario, the process writes "Out of memory." or similar to stderr. In the other, it segfaults. Maybe.

I'll take the clear error message.


In the case of my day job (CCTV application 80% C#, 20% C and C++) writing to a bad pointer will get reported as an AccessViolationException with no hope of getting a dump or a stack trace of the native code. An allocation failure will get translated into an OutOfMemoryException and typically includes stats of what is consuming RAM.


I'll clarify this. I'm not saying you shouldn't ever check return values, that's obviously not the right thing to do. And of course there are exceptions to the general rule. If you're allocating a large chunk of memory and there's a reasonable expectation that it could fail, that should be reported, of course.

In the general case, however, if allocating 100 bytes fails, reporting that error is also likely to fail. An actual memory allocation failure on a modern computer running a modern OS is a very rare and very bad situation. It's rarely recoverable.

It's not bad to handle allocation failures, but in the vast majority of cases it's very unreasonable to do so. You can write code for it if you want, have fun.

And just to be completely clear, I am ONLY talking about calls to malloc, new, realloc, etc. NOT to OS pools or anything like that. Obviously, if you allocate a 4Mb buffer for something (or the OS does for you), you expect that you might run out. This is ONLY in regards to calls to lower level heap allocators.

I don't think you'll find any experienced programmer recommending that you always check the return from malloc. That's completely absurd. There are always exceptions to the rule, however.


> In the general case, however, if allocating 100 bytes fails, reporting that error is also likely to fail. An actual memory allocation failure on a modern computer running a modern OS is a very rare and very bad situation. It's rarely recoverable.

I call BS on this. First of all, it's not the 100 byte allocation that is likely to fail; chances are it's going to be bigger than 100 bytes and the 100 byte allocation will succeed. (Though that is not 100% either.) Second, the thing you're going to do in response to an allocation failure? You're going to unwind the stack, which will probably lead to some temporary buffers being freed. That already gets you more space to work with. (It's also untrue that you can't report errors without allocating memory but that's a whole other story...)

I suspected when I wrote in this thread that I'd see some handwavy nonsense about how it's impossible to cleanly recover from OOM, but the fact is I've witnessed it happening. I think some people would just rather tear down the entire process than have to think about handling errors, and they make up these falsities about how there's no way to do it in order to self justify... Although, when I think back to a time in which I shared your attitudes, I think the real problem was that I hadn't yet seen it being done well.


If you have time, can you expound on this? Is there, perhaps, an open source project that handles NULL returns from malloc in this way you could point me to?


My first instinct is to say look at something kernel-related. If an allocation fails, taking down the entire system is usually not an option (or not a good one anyway). Searching http://lxr.linux.no/ for "kmalloc" you see a lot of callers handling failure.


Adding a bit more after the fact: most well-written libraries in C are also like this. It's not a library's business to decide to exit the process at any time. The library doesn't know if it's some long-running process that absolutely must keep going, for example.


I'm not sure what you imagine when you say "modern computer running a modern OS". Does this not include anything but desktop PCs and laptops? Because phones and tablets have some rather nasty memory limits for applications to deal with, which developers run into frequently.

The space I work in deals with phones and tablets, as well as other embedded systems (TVs, set-top boxes, etc.) that tend to run things people think of as "modern" (recentish Linux kernels, userlands based on Android or centered around WebKit), while having serious limits on memory and storage. My desktop calendar application uses more memory than we have available on some of these systems.

In these environments, it is essential to either avoid any possibility of memory exhaustion, or have ways to gracefully deal with the inevitable. This is often quite easy in theory -- several megabytes of memory might be used by a cached data structure that can easily be re-loaded or re-downloaded at the cost of a short wait when the user backs out of whatever screen they're in.

But one of the consequences of this cavalier attitude to memory allocation is that even in these constrained systems, platform owners have mandated apps sit atop an ever-growing stack of shit that makes it all but impossible for developers to effectively understand and manage memory usage.


That is the open path to security exploits.


See xmalloc [1] and friends, courtesy of the Git project.

[1]: https://github.com/git/git/blob/master/wrapper.c


It depends on your recovery strategy. If your response to a `realloc()` failure is going to be to terminate the program, then you might as well assign the result back to the original pointer. If you're going to continue executing without allocating the extra memory, then yes, you need to assign the result to a separate pointer.
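The safe idiom keeps the old pointer alive until the new one is known good; a sketch, assuming a nonzero capacity (the helper name is made up):

```c
#include <stdlib.h>

/* Grow (or shrink) a buffer without leaking it on failure: commit
 * realloc's result only once it is known to be non-NULL. */
int resize_buffer(char **buffer, size_t new_capa) {
    char *tmp = realloc(*buffer, new_capa);
    if (tmp == NULL)
        return -1;   /* *buffer is untouched and still owned by the caller */
    *buffer = tmp;
    return 0;
}
```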


The realloc implementation in this blog is incorrect: the passed in pointer must not be freed if realloc is called with a non-zero length and returns NULL. This will cause a double free in correct callers.

As someone else pointed out, the example call of realloc is also incorrect.

edit: also, malloc is incorrect for three reasons: 1) sbrk doesn't return NULL on failure (it returns (void *)-1), 2) a large size_t length will cause a contraction in the heap segment rather than an allocation, and 3) sbrk doesn't return a pointer aligned in any particular way, whereas malloc must return a pointer suitably aligned for all types.
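A common fix for (3), assuming the break starts out aligned, is to round every request up to the strictest alignment before bumping the pointer:

```c
#include <stddef.h>

/* Round n up to a multiple of the strictest fundamental alignment,
 * so that consecutive sbrk-style carve-outs stay suitably aligned
 * for any type. Works because _Alignof(max_align_t) is a power of two. */
static size_t round_up_align(size_t n) {
    const size_t a = _Alignof(max_align_t);
    return (n + a - 1) & ~(a - 1);
}
```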


I fixed the double free. I must admit that the code was typed as I wrote the blog entry, and is horribly wrong :)


I fixed a crippling bug on another platform that was taking down whole servers, because someone was depending on a clever realloc to behave well.

This is implementation coupling at its worst. Don't do it.


Yes and no. The real error here would be to realloc without any geometric progression IMHO - i.e. reallocating one more byte each time, which would behave well on Linux (except for the libc call cost, of course) but not on other implementations (such as some versions of Microsoft's MSVCRT). Assuming realloc has no catastrophic performance impact is not something too daring.
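A sketch of that geometric progression (factor of two here; the type and function names are made up):

```c
#include <stdlib.h>

/* Append one byte, doubling capacity when full, so realloc is called
 * O(log n) times for n appends instead of once per byte. */
typedef struct {
    char  *data;
    size_t len;
    size_t capa;
} growbuf;

int growbuf_push(growbuf *b, char c) {
    if (b->len == b->capa) {
        size_t new_capa = b->capa ? b->capa * 2 : 16;
        char *tmp = realloc(b->data, new_capa);
        if (tmp == NULL)
            return -1;        /* old buffer still valid on failure */
        b->data = tmp;
        b->capa = new_capa;
    }
    b->data[b->len++] = c;
    return 0;
}
```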


This is really neat. Somehow I always assumed realloc() copied stuff instead of using the page table.

But say you have a 4K page size. You malloc() in turn a 2K object, a 256K object, and another 2K object, ending up with 2K, 256K, 2K in memory. Then your 256K object is not aligned on a page boundary. If you realloc() the 256K object, it has to move since it's surrounded by two objects. When you do that, you'll wind up with the two pages on the end being mapped to multiple addresses. Which is actually just fine... Interesting...


The libc memory allocator does not simply hand out memory contiguously. In your example, the 256K block will end up being 4K aligned.

In fact, that's what the article already explains: the large alloc will just end up being passed through to the kernel, which only deals at page granularity.


What the article revealed to me is that there is no guarantee a contiguous block of allocated virtual memory will be backed by contiguous physical memory. In hindsight, that should be obvious.

But what does this mean for locality? Will I be thrashing the cache if I use realloc frequently? Do I even have the promise that malloc will return unfragmented memory?


Do I even have the promise that malloc will return unfragmented memory?

What do you mean by this? malloc returns memory that is contiguous in the virtual address space. It may not be contiguous in the physical address space, but that should be irrelevant for cache behavior.

Will I be thrashing the cache if I use realloc frequently?

I suppose. But if you use realloc, you should anyway ensure that you realloc geometrically growing chunks of memory (e.g., whenever you need a new buffer, you multiply its size by a constant factor like 1.2 instead of just adding an element at a time). As a result, realloc() should be infrequent enough that it normally doesn't matter.


And this is why OpenSource is awesome.


Agreed. Not sure why people are downvoting you. It blows my mind every time I think, "I wonder how <some program> works?" and I'm able to just "apt-get source some_program" and check it out.

Working with Linux and having the source (and the ability to change it) for entire stack all the way down to and including the kernel is liberating. As a programmer it feels like the entire world is open to me.

I guess that's GNU's dream brought to life, really.


In the original #define, the parameter is lower case "c" and the expansion uses upper case "C".



