Publishers withdraw more than 120 gibberish papers (nature.com)
102 points by sethbannon on Feb 25, 2014 | 53 comments



The papers in question were generated by the SCIgen paper generator. Jeremy Stribling, Dan Aguayo and I are the authors of that system, with Jeremy doing the lion's share of the work. The inspiration for SCIgen was TheSpark.com's high school paper generator, which ran from about 1999 to 2001. To call these autogenerated works gibberish is an affront to those of us who hand-assembled their context-free grammars!
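
A minimal, hypothetical sketch of the technique being described (recursive expansion of a hand-assembled context-free grammar); the rules and vocabulary below are invented for illustration and are not SCIgen's actual grammar:

    import random

    # Each nonterminal maps to a list of possible productions; anything
    # not in the table is a terminal word and is emitted as-is.
    GRAMMAR = {
        "SENTENCE": [["NP", "VP"]],
        "NP": [["the", "ADJ", "NOUN"], ["our", "NOUN"]],
        "VP": [["VERB", "NP"]],
        "ADJ": [["probabilistic"], ["ubiquitous"], ["amphibious"]],
        "NOUN": [["algorithm"], ["partition table"], ["methodology"]],
        "VERB": [["refines"], ["enables"], ["synthesizes"]],
    }

    def expand(symbol):
        """Recursively expand a symbol by picking a random production."""
        if symbol not in GRAMMAR:
            return symbol
        return " ".join(expand(s) for s in random.choice(GRAMMAR[symbol]))

    print(expand("SENTENCE") + ".")
    # e.g. "the ubiquitous methodology refines our partition table."

SCIgen's real grammar works at the level of whole papers (abstracts, sections, figures, citations), but the core mechanism is the same recursive expansion.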


This is great work. I'm curious about your motivations. Do you consider it a validation of your work that so many papers were published?


At the time, there was an arms race within the systems community to see who could flip a bigger bird to the organizers of the SCI spamference. David Mazieres and Eddie Kohler started it off when they submitted "Get Me Off Your Fucking Mailing List" [1] for publication. We wanted to one-up them and figured we could do so by submitting a random paper that might be accepted into the conference, allowing us to give an in-person talk. The plan worked up to a point, but when we asked the internet for $3k to send us to Orlando, the press and then the organizers of SCI caught wind and revoked the random paper they had previously accepted.

Everything since has been gravy. We're pumped that SCIgen is still a useful weapon against charlatans the world over (this now includes you, IEEE and Springer).

[1] http://www.scs.stanford.edu/~dm/home/papers/remove.pdf


That link is priceless. What does it remind me of? I have a half-memory of a fake scientific paper full of nonsense words and incomprehensible diagrams that was very amusing. It was a bit like this: http://img262.imageshack.us/img262/9581/waltz19pq.jpg

Anyone know what I'm on about?



That was it. Thank you.


Also, the slides are pretty good. http://www.youtube.com/watch?v=yL_-1d9OSdk


Indeed, the chicken paper inspired the work of Mazieres and Kohler. They should have cited it; maybe that's why SCI rejected them.


It wasn't really funny until I saw the first figure, then I died laughing.


Conference proceedings (not to be confused with a handful of actual journals that just contain "proceedings" in their name) can be thought of like the paper programs you might be handed at a play. They're usually meant to be an accompaniment to the event rather than a stand-alone publication. Their intent is to collect contributions from conference participants into a handy reference that attendees can use if they want more information, missed a talk they were interested in, etc. Extremely high quality proceedings are peer-reviewed and might eventually evolve into a textbook. This level of quality is relatively rare though.

Most conference proceedings are not peer-reviewed and many aren't even read once by the conference organizers/"editors". This usually isn't thought to be a big deal because it is assumed that, were a submission truly fraudulent, the submitter would inevitably be outed when they give their talk. However, some conferences make submitting an original paper for their proceedings mandatory for acceptance. Getting funding to attend a conference is usually a lot easier if you're presenting, so researchers are motivated to speak even if they merely wish to attend. Speaking looks better on your CV to boot. Since conference proceedings are typically ignored, researchers generally won't put their best effort into these papers. When dead trees ruled, conference proceedings would be rapidly forgotten simply because you'd have a deuce of a time finding them if you somehow heard about a paper in it that you wanted to read! They simply weren't distributed to anyone besides conference attendees and maybe a library or two. Today, electronic copies have brought poor quality conference proceedings to a far vaster audience than they were ever intended for!

I'm disappointed, but not the least bit surprised that some people would submit something created by a paper generator to a conference. The blame for this falls squarely on conference organizers for requiring paper submissions but being too lazy to even read them once. If you don't want to read a stack of badly written papers, require abstracts only and, if you're still gung-ho to make some proceedings, welcome voluntary paper submissions.

This incident should not shake confidence in peer-reviewed journals one iota. It might, however, discourage conference organizers from requiring papers from speakers if they're not committed to reading them all.


>Since conference proceedings are typically ignored researchers generally won't put their best effort into these papers.

It's interesting, because this is completely the opposite in many CS fields (e.g. security). The top conferences carry much more weight than any journal. It's probably related to how fast the field changes.


HCI and graphics are other examples: a SIGCHI proceedings paper carries way more weight than a paper in one of the HCI journals. And a SIGGRAPH proceedings paper is better than a paper in something like Journal of Computer Graphics Techniques, although they've recently converted the SIGGRAPH proceedings into a special issue of ACM Transactions on Graphics (which is now more prestigious than other issues of that journal, because it's really a proceedings volume), mostly to avoid the confusion from other fields that don't consider conferences to be publications. Those two conferences actually have nearly exclusive prestige in their field, to the extent that they're the only top publication venue: if you're working in HCI or graphics you more or less need SIGCHI or SIGGRAPH publications to be considered a legit active researcher. They also tend to be more selective in peer-review than the journals are, with acceptance rates in the 10-15% range, while journals are usually more like 20-35%.

In AI I think it's a bit mixed, and maybe a US-European difference. Europeans tend to value journal publications in places like JAIR, JMLR, various IEEE Transactions, etc. more. Americans also value the best of those journals, but I get the impression that conference publications at places like NIPS, AAAI, and ICML are at least as good, and definitely better than a lower-tier journal.


One day when I was doing undergrad research, I helped one of my professors move to their new office[1]. He had an entire bookshelf of SIGGRAPH proceedings going back over 20 years.

[1] Before anyone brings up "undergrad research slave labor", I'll have you know I received a copy of "Gödel, Escher, Bach" for my troubles.


I was a paid undergrad security researcher, and I made double what any other on-campus student job paid, where labor was otherwise cheap and abundant.


I'm not in CS, but I've heard of this. It's definitely aberrant. Some other fields change just as rapidly so I'd just chalk this up as a cultural difference.


In many CS fields, workshops are the equivalent of conferences in other fields such as physics. It is important to judge each field on how the primary peer-review publication is done, not by some abstract rule that is supposed to be the only valid standard.

As a side note, I've heard people from the humanities scoff at physicists and their belief that journal papers are worth mentioning. The only real publications are books.


Actually, there are workshops in many CS fields which are selective (not as much as big conferences, that's for sure) and as serious about peer review as conferences or journals.

The name "workshop" mainly means that the focus of the event is very specific and that the attendance is intended to be small so that real discussion with everyone can happen (contrary to big conferences where there are hundreds of people).

For example, my last published paper was submitted to a workshop [1] and I received four detailed reviews, every bit as good as the reviews I could have gotten from a big conference. This is an example of a workshop that is not very selective (because it receives few submissions) but which is very serious and would most certainly reject any computer-generated paper.

Another example is CHES [2], a workshop which grew big and is now the most prestigious conference (again, more prestigious than journals, as in most CS fields) in its field (crypto hardware/side-channel security). In this case the name "workshop" has stuck, but it is actually a full-fledged conference.

Yet another example is a small workshop [3] (still in the same field) which does serious peer reviewing (I had three long and detailed reviews when I submitted my paper there). Last year they selected 8 papers for presentation on the day of the workshop, and 4 of them were selected by a journal [4] for subsequent publication as extended versions (after only another small round of reviews, since the workshop's reviews were deemed serious enough by the journal's editors).

Maybe this is particular to the field of security (could it be due to the hacker subculture background it has in addition to its academic background?), but I guess my point is that it is not possible to generalize how the publication system works for all sciences, let alone for all academic research.

[1] http://pprew.org/

[2] http://www.chesworkshop.org/

[3] http://www.proofs-workshop.org/

[4] http://link.springer.com/journal/13389


Sorry, I was too general. I am very much aware that there are selective workshops in CS (I've been to some). I think I mostly wanted to point out some differences. My main wish was to convey that publishing venues are different, and the name of the venue is not always an indication of the type of publishing.

Thanks for pointing out that I made the mistake I was trying to correct.


Actually this is what I understood from your previous post. I was trying to strengthen your point with concrete examples and thought the best way to do it was in a reply. Sorry if that was unclear!


No problem. It was one of my pet peeves when others (typically physicists) devalued the work of the computer scientists at the university I was at just because they believed that journals were the only acceptable form of publishing.


My read on it is that the traditional approach in CS was:

1. Non-peer-reviewed "workshops" or "symposia". Place for position papers, work-in-progress, etc. Typically screened only for basic suitability for the event. In some cases authors may only submit an abstract of their topic, not a full paper.

2. Peer-reviewed but non-archival "conferences". Place to disseminate properly written up recent work in relatively short form [6-12 pages double-column], get feedback from other researchers, etc. Typically peer-reviewed by 2-4 reviewers, on the basis of a full paper (not abstracts like in some fields). The top 10-40% of submitted papers are accepted for inclusion (depending on the conference's selectivity), and then accepted authors typically have a month or so to revise their papers on the basis of reviewer comments, before submitting the final version for publication in the proceedings.

3. Peer-reviewed archival "journals". Place to wrap up a project or line of work with a publication that takes 3 to 5 conference publications, and integrates them into a full writeup of the project, tied into a nice bow for posterity suitable for archiving in The Literature. Peer-reviewed in the same way as other fields. Typically longer, 20+ pages.

But what seems to have happened is nobody really reads #3, since all the new work comes out in #2, the peer-review standards of #2 are already high enough for practical filtering, and #3 is years behind. And then people (especially Americans, especially in certain fields) just stopped writing #3 as often altogether. The view seems to be that tying several conference papers into a bow might be nice from the perspective of having a sorted-out archival literature, but at the present time, your peers have already moved on, and there are diminishing returns from tying those 3 already-published SIGGRAPH papers into a summary journal article, compared to working on new research for the next SIGGRAPH paper instead. It's even seen a little as "milking" your previous work: if you take your conference publications and turn them into a journal article, some people will assume you're doing it because you need the journal line on your CV for some kind of tenure/hiring committee, since otherwise why would you be spending all this time "upgrading" old conference papers into a journal, rather than writing new conference papers on new research?


Replying to self (too late to edit): One other reason it developed this way, I think, is because of the strong industry participation. Industry researchers often are willing to write a short conference paper: Usenix, SIGGRAPH, etc. But they usually (with exceptions) have little interest in writing up a 20-page journal article and going through a heavyweight revise-and-resubmit review process that might take a year. So the good industry research ends up mostly at conferences, and since some of the industry research is very high-profile (e.g. Pixar stuff at SIGGRAPH), it further cements the conferences' reputation as the place top research appears.


Yeap. Industry has to ship something useful.


That's because it takes a long time to publish in a journal (like 1.5 years or longer), and by that time the topic is old. Publishing in a conference allows you to bring fresh-from-the-oven results to your colleagues, along with the freshly-baked bread smell ;)


Journals are highly variable, at least in physics. I've experienced everything from acceptance after a week and publication within the month to pure hell that dragged on for over 2 years. The average is a few months and certainly not over a year! You'd better believe physics moves fast too. Preprints frequently gather dozens of citations before a journal editor even bestirs him or herself to send the submitted article out to referees! ArXiv is what everyone reads daily for the latest stuff, but always with caution. There's crazy crank stuff and crap in there too unfortunately.

Preprints are a great way to distribute results quickly without waiting for the gears of the peer-review process to complete their grinding or for the right conference to come up. However, when you get good referees the peer-review process can really put your work through the acid-test. We frequently get comments back that prompt us to do great work (along with many that just piss us off with busy work...). When it functions properly, the peer-review process bursts your institution's local bubble of thought-patterns and reminds you of how other experts in the field might think about the same problem. Occasionally it catches big mistakes or spots huge redundancies.

China is starting to publish a lot, but the standards aren't there yet. Some of their institutions do fantastic work while others really need peer-review to keep them honest. There are, of course, dishonest individuals pretty much everywhere who will take advantage of any flaws in the system. How do disciplines that eschew peer-review deal with this? When viewing preprints, I unfortunately place far too much importance on authors I know and trust. Sources matter when you can't really trust every article you read. Without peer-reviewed journals, how do you avoid potentially ignoring brilliant research by people unknown to you?


If I'm dreaming, the ideal outcome of this would be that Springer and IEEE decide to actually apply some kind of scrutiny to the conferences whose proceedings they publish under their names. It's a pretty poorly kept secret that, while they publish some stuff that's top work in several fields, they also publish some total crap that is so bad it's inconceivable someone actually competent has spent even 20 minutes vetting it. But they seem not to care a lot. They (especially the IEEE) are coasting on a name produced through some good work over the years. Much of that reputation is well deserved, but you can only shove so much junk into IEEE Xplore before it loses the cachet it built up in previous years. If they don't shape up, it will reduce the attractiveness of IEEE and Springer for people publishing actually good work as well. Why affiliate your legitimate conference with publishers who have no quality control? The IEEE in particular is a nonprofit organization which is supposed to have a mission to advance engineering, so it's particularly problematic that it wouldn't be doing a better job.


> If I'm dreaming, the ideal outcome of this would be that Springer and IEEE decide to actually apply some kind of scrutiny to the conferences whose proceedings they publish under their names.

I'm sorry to break it to you, but that won't happen. It would totally destroy the awfully good business model they have, which is one of the best-engineered frauds ever.

Right now the situation is:

- Researchers, of which almost 100% are paid by public money, do research and write their results in papers which they submit to journals / conferences / workshops.

- If the targeted journal / conference / workshop wants to do peer review, its editors / committee are in charge of distributing the load; those people are also researchers (also close to 100% paid by public money).

- They offload the reviewing work to other researchers who hopefully are competent on the subject of the papers. Again, public money, since reviewing is seen (rightly, I think) as a normal part of a researcher's job.

- The selected papers are given to the publishers and often their copyright too.

- The publishers now put the papers online, put a paywall in front of them, and make private profits $$$$.

- On top of that, not many people buy individual papers; most access is rented by academic institutions for colossal amounts of money (again, public money!). I'm really not kidding when I say "colossal".


I can see that narrative with Springer, but the IEEE is a nonprofit, the main professional society for large areas of computer & electrical engineering. It's not necessarily the best-run nonprofit, but they aren't trying to pump up a stock price or give an ROI to investors or anything. They do need some revenue to pay for their ongoing operations, but in theory they should also be very interested in ensuring that revenue-generating operations don't compromise their main mission and reputation (which is the reason to raise revenue in the first place).

I think part of it is that historically the IEEE has wanted to be relatively open to new fields of research, so if you have a few professors willing to vouch for a new conference, the IEEE is willing to sponsor it and publish the proceedings. It's quite decentralized and any of the various "societies" and "interest groups" within the IEEE can sponsor a conference without central approval... and it's also fairly easy to start a new interest group. That system depends on most active IEEE members being legitimate researchers, though, who are in good faith trying to put on real conferences. If that's true, it makes sense for the approval policy to be relatively permissive. But some people have been taking advantage of this and producing vanity conferences, so this system probably needs to be tightened. Either the IEEE needs to exercise more control over conference approval, or they need to be more diligent in kicking out people who abuse the trust of the permissive system. The ACM (the main professional society for computer scientists) is somewhat more conservative about sponsoring new conferences, though that does occasionally produce the opposite problem where it takes a lot of legwork (and sometimes politicking) to get a new one accepted.


I sincerely hope you are right.

However, as pointed out in other comments, most researchers know which venues are good and bad in their particular fields. This means that in reality, the publisher of a journal or of a conference's proceedings matters less than the reputation of the conference or journal itself (sadly, that may be less true when it comes to recruitment in academia), which is earned through the quality of the papers published there. If I had to choose my fights, I guess they would be the generalization of gold open access and the end of bibliometrics, before trying to get these publishers more involved in what they publish (since we don't really need them anymore).


I think that's true within a field, yeah. But outside the really top conferences, stuff as famous as SIGGRAPH, my experience has been that publisher affiliation is given significant heuristic weight, especially when people slightly outside a field are evaluating a conference they aren't personally familiar with (either researchers in other areas, or administrators/deans/etc.). A non-HCI researcher (even another CS researcher) might wonder, is this HCI conference I've never heard of any good? If it's an ACM HCI conference, many people will automatically attribute to it some baseline level of credibility, while if it's a completely independent conference, you start out more in the hole having to justify that this conference is legit. A top conference like SIGGRAPH could probably get away with not being ACM (though the ACM affiliation probably helps with deans/etc.), but more specialized, mid-tier conferences have a harder time going DIY.

I don't think that should always be the decisive factor, just that it's an issue. Last year I was proceedings chair of a conference that I did help move out of ACM, which it had been affiliated with and published with for the previous few years (http://www.fdg2013.org). There were various reasons for it, one being that being ACM-affiliated imposes significant bureaucracy and some financial costs that have to be recouped in registration fees (they're super-paranoid about the event insurance, for example). Another is that the conference is in a field (game studies, game AI, etc.) where we hope non-academics will also read the papers, and being open access would make that a lot easier. Being in the ACM Digital Library doesn't prohibit authors from putting up non-paywalled PDFs on their own sites, but it does prevent there from being an official, centralized open-access proceedings. But I would say we did lose some previous participants as a result of the move. For some people, moving from "an ACM conference, with proceedings archived in the ACM Digital Library" to "an independent conference, with proceedings published by the Society for the Advancement of the Science of Digital Games" is a deal-breaker: they see it as a drop off in prestige, from something they can list on their CV and use for their tenure case to something they can't. This varies completely based on university and field, of course, but a number of people told us losing the ACM publication was a big deal for them, and would lead them to publish elsewhere. One researcher from Brazil even said that his travel funding depends on the conference being sponsored by a major professional organization (IEEE/AAAI/ACM/etc.), so if we went independent he wouldn't be able to come anymore. I still think it was the right move on balance, but I can see the situations some of these people are in as well.

There's also a little bit of an archival question. The ACM DL will probably still exist in some form in 50 years. Will an independent conference's archive still exist in 50 years? To tackle that question we're investigating being archived by a new open-access archival initiative from the University of California (http://escholarship.org). There's a possibility this may also mollify some of the people upset about losing the "ACM" publisher line, since they could instead list a "University of California" publisher line, which is also a well-known institution. Another conference I've published in has taken that strategy, with the University of Michigan (http://quod.lib.umich.edu/i/icmc/). I think this might be an attractive way for conferences to keep the institutionally-archived cred while getting out of the paywall mess. However it does require these archives to then do some decent vetting so they don't archive fake conferences and lose their credibility.


Thanks for your detailed response. I knew that ACM and others were a factor in many heuristics for evaluating conferences, but I had no idea to what extent! This is really sad…

Congrats for your move to open access with FDG :-).

It's really sad that putting the papers on arXiv + hosting a web page on the conference website saying "these versions of these papers have been accepted at this conference after peer-reviewing etc." is not sufficient. There is a real problem here, because clearly the quality of the results would be exactly the same as behind a paywall hosted by the ACM.

Thanks for the pointer to eScholarship too.


I tried to read the papers but they cost $35.95 each to download.


There are a couple of examples here: http://pdos.csail.mit.edu/scigen/#examples

I found them side-splittingly funny, at least for a page or so.


"Cryptographers must usually enable the partition table."

These are actually really really funny.


It's PREMIUM gibberish. :-)


It's a cruel world.


It may not be long before passable high school and college essays can be undetectably generated by software. I wonder what we'll do then.

At some point, I guess we'll have writer software that pretty much generates nicely articulated narrative for whatever it is we wish to say. "Write up something about global warming. Make it borderline skeptical, quote the Concerned Scientists and so forth. You know what to say."


> It may not be long before passable high school and college essays can be undetectably generated by software. I wonder what we'll do then.

If we fed such a program good-quality reference books, we could massively improve the quality of most Wikipedia articles.


Happily (?) high school and college essays (and probably potential scientific publications) will also be read chiefly by software in the near future, just as GRE essays are graded in part by software today. Whichever side has the best algorithm wins.


This is pretty much the case now: ghosted essays are marked by ghosted professors.


That would mean that a computer had passed the Turing test which would be...quite an achievement.


Not at all - you aren't engaged in conversation with a paper you are reading.


Hrmmm. Yes, that's true. I stand corrected!


"...Labbé, who has built a website where users can test whether papers have been created using SCIgen..."

It's so lucky that he's built a tool to check whether a document is "computer-generated nonsense". God forbid that the publishers have to actually read the papers to check.
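
The article doesn't say how Labbé's detector actually works, so here is a rough, purely hypothetical sketch of one obvious approach: since SCIgen draws every word from a fixed grammar, a paper whose vocabulary is almost entirely covered by known SCIgen output is suspect. The corpus, tokenizer and threshold below are invented for illustration:

    import re

    def words(text):
        return set(re.findall(r"[a-z]+", text.lower()))

    def scigen_overlap(candidate, known_scigen_papers):
        # Fraction of the candidate's distinct words that also appear in a
        # corpus of known SCIgen output; values near 1.0 are suspicious,
        # because a genuine paper brings in domain vocabulary of its own.
        reference = set()
        for paper in known_scigen_papers:
            reference |= words(paper)
        vocab = words(candidate)
        return len(vocab & reference) / len(vocab) if vocab else 0.0

    def looks_like_scigen(candidate, known_scigen_papers, threshold=0.95):
        return scigen_overlap(candidate, known_scigen_papers) >= threshold

The published detectors reportedly use more principled inter-textual distance measures, but the underlying idea of comparing a document against known generated text is similar.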


This is quite the deja vu: http://www.physics.nyu.edu/faculty/sokal/

As I remembered it, their "fake" (automatically generated) paper was accepted into a conference. They then tried to submit a second paper that was also auto-generated, and that's when they got caught. I may be confusing the two stories, though.

In any case, it's very easy to generate your own postmodern gibberish papers - every time you visit this page, it refreshes with a new one: http://www.elsewhere.org/pomo/


Sokal's paper was carefully hand-written, so a somewhat different situation. His goal was to show that if you used scientific jargon in a highly metaphorical way that flattered the ideological views of the editors but was pretty hilariously shallow (e.g. casually analogizing the axiom of choice in mathematics to pro-choice politics), it could get accepted. He didn't aim to show that you could auto-generate such a paper, though, merely that a human could construct one.

Unlike the papers here, Sokal's paper is also not really gibberish. It's well-written and coherent in a certain surface-level sense, it's just that what it's arguing is nonsense. It strings together coherent paragraphs out of coherent sentences, in an argument that goes somewhere and which you can follow... but the argument itself is ridiculous, put together out of hilarious non sequiturs and puns masquerading as reasoning. The SCIgen papers are gibberish on a much more literal level, in that their paragraphs don't actually make sense, their abstracts don't match the subject in their titles, etc.


Hm, yeah, I'm definitely confusing them. In any case, I saw this headline and thought, "where is the [2004] tag?", because it was literally exactly the way my friend told me the SCIGen and/or Sokal story back then.

Quite a shock to see that this was 2014.


Previously, I had understood this to be a concern mostly with the so-called "open-access" journals.

In 2013, Science magazine went to some extreme lengths to discredit these journals, which more often than not seem to lack any form of scientific peer review. A fake paper, submitted under the name of a made-up person working for a non-existent university and purporting to describe a cancer drug extracted from lichen, had experimental flaws that should have been blatantly obvious to even an undergraduate in the field.

Of the 304 papers submitted, 157 were accepted and only 98 were rejected, with the rest not eliciting a timely reply. Only 36 papers received reviewer comments about the scientific shortcomings, and in 16 of those cases the paper was accepted anyway. Source: http://www.sciencemag.org/content/342/6154/60.full

It seems pretty shocking that even well-known publishers like IEEE and Springer are no strangers to such an obvious lack of peer review. Does it actually serve anyone's interest that we have so many parallel "scientific" journals that publishers can't even find competent reviewers for a large number of the papers published? Does anyone follow the publications? Or are they all just a way to rip off institutional subscribers like universities, who must subscribe to all possible journals and pay outrageous sums to the publishers?


The paper you cite is a load of crap. And its author John Bohannon is a sell-out. Or at least I hope so for him, because if he compromised his intellectual faculties and his scientific reasoning for less than a very good place in hell, he is mad.

This paper makes an obviously false hypothesis: that the quality of a journal depends on whether it is open access or not, while clearly it depends on whether it is seriously peer-reviewed. The real question is: among peer-reviewed journals, are those which have an open access policy worse, better, or the same as the others? (My bet is that they are the same, since the scientists/researchers doing the peer reviews have no more incentive to do it better in one case than the other.)

Bohannon presents his results in a way which can't say anything about that, mainly because he is comparing two things (open access and closed access journals) but only studied one of them and makes assumptions about the other. With this level of scientific hypocrisy anyone can make numbers tell anything they want.

What I take from this article is that Science is scared of what is coming. Nothing else. If they were honest they would have titled it "Who's afraid of open access?" and the content would mainly be "us".


To summarize the concerns I have with this study:

1. To prove that open access is worse than closed access, you need to do experiments comparing both, not just do experiments on open access and say the results are bad. Try imagining a study using the same methodology to show that American journals are bad.

2. Even assuming that open-access journals on average are worse than closed-access journals, it does not follow that all open-access journals are crap. I think it is safe to say that 80% of all publication venues are crap regardless of their access policy. Fortunately, scientists usually do not choose a venue at random among open-access or closed-access ones; they pick those that have a good reputation. So to establish that open access is bad, you should evaluate the right venues.

To give another analogy, this is like looking at random emails in transit, observing that they are spam, and concluding that email is worse than Twitter, without observing anything on Twitter, or taking into account the fact that people are not reading random emails.

I think this study does exemplify the fact (well-known among scientists) that there is a huge mass of low-quality publication venues with little or no review of the submissions, but branding it as "open access is bad!" can only be explained by further political motives.


The latest discovery is merely one symptom of a “spamming war started at the heart of science” in which researchers feel pressured to rush out papers to publish as much as possible.

~Sounds about right


Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo?

http://en.wikipedia.org/wiki/Buffalo_buffalo_Buffalo_buffalo...


Sometimes, but at others: Buffalo buffalo buffalo Buffalo buffalo Buffalo buffalo buffalo. [1]

Similarly, buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo buffalo buffalo. [2]

________________________________

Notes:

1. C n v C n C n v

2. n v C n v v C n n v



