Peter Norvig Joins Stanford HAI (stanford.edu)
469 points by azhenley on Oct 11, 2021 | 105 comments



For anyone not in the loop, Norvig was the author of Paradigms of Artificial Intelligence Programming. This was a substantial contribution to the field of educational computer science literature, and helped to kickstart the idea that the way to learn is to read, not just write.

It's also one of the few AI books that isn't rooted squarely in Algol. It's written in fairly decent, though not always portable, Common Lisp, just like most Common Lisp books of the era.

It can be read here, in mobi or zipped HTML format: https://github.com/norvig/paip-lisp/releases/tag/1.1

Or here, in PDF: https://github.com/norvig/paip-lisp/releases/tag/v1.0

For a native web copy, abuse Safari Online's free trial. It's what I assume everyone else does when they want to read a niche technical book that O'Reilly put out but doesn't print another run of.

Depending on the day, the book ranges anywhere from $2 to $60 on Amazon, used, if you want a hard copy.

(Edited to fix the very butchered title that I wrote in error initially.)


> and helped to kickstart the idea that the way to learn is to read, not just write.

Can you expand on this? Nearly every field is learned with extensive reading.


Computer science, historically, has been a field filled with books that encourage you to write code while not particularly having you read much of it. K&R is a good example of this. PAIP went against this notion, and has you spend much of the book reading code.

Yes, this is taking the way learning is done in almost every case and field and applying it to programming. No, it still hasn't really caught on universally in computer science.


Has it not caught on because it might not align with people’s experiences?

I know for me, reading is only the first step in learning. Doing is far more educational, and often when things go wrong, I go back and read it again. So I wouldn't argue that reading isn't worthwhile, but at least in my experience reading is only 50% of my learning process; I might even go lower than that, to the 20-40% range.

As an example, I read the Rust Book beginning to end, but it wasn’t until I started writing code with it that I truly understood some of the concepts, like move-by-default. When I read about move-by-default, I understood what it was saying, but I only grokked it after writing code and experiencing it—and at the same time realizing that it was the first time I’d actually worked with a language that had that as a fundamental piece of it.
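
To make that concrete, here's a minimal sketch of move-by-default (my own toy example, not from the Rust Book):

    fn main() {
        let s1 = String::from("hello"); // s1 owns the heap-allocated string
        let s2 = s1;                    // ownership moves to s2; no implicit copy happens
        // println!("{}", s1);          // compile error: borrow of moved value `s1`
        println!("{}", s2);             // only the new owner is still usable

        let n1 = 42;                    // i32 implements Copy, so this is a copy, not a move
        let n2 = n1;
        println!("{} {}", n1, n2);      // both bindings remain valid
    }

The commented-out line is exactly the kind of thing that only sinks in once the compiler rejects it for you.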

I have a lot more examples like that, just happens to be a memorable one.


> K&R is a good example of this.

I don't see how K&R can be a good example (or a bad one or any kind of example at all) in the context of computer science education. Its purpose is to teach the C language to experienced programmers. It has no aspirations of teaching computer science.

K&R is not even about teaching you to code. It says in the preface to the first edition that it assumes you already know concepts like loops and assignment statements. It also says if you're a beginner, you need to supplement the book by seeking the assistance of someone with more experience.


K&R is still an example of computer science education. Programming languages are a subset of computer science, and K&R is a book that tries to teach you a language. There's a substantial body of evidence in linguistics research that input is just as important as output, if not more so, when it comes to learning a natural language; programming languages are no different.


K&R doesn’t even come up in a computer science education. The algorithms class I took 25 years ago now was mostly reading and writing algorithms by hand on paper.

I think maybe you're referring specifically to learning how to program and not computer science at all. Nearly every CS curriculum used to involve very little hands-on programming and only in the last decade or so became more code-writing focused, which is exactly the opposite of the narrative you are painting here.


> K&R doesn’t even come up in a computer science education.

K&R was the main textbook for my first university CS class at Caltech in 1990. (Though, to be fair, the bulk of what I would call “computer science” as distinct from “mechanics of programming in the language used for the class” of that class was taught through other means, but still.)


The malloc exercise in K&R is CS.


Seems related to Alan Kay calling it a "pop culture". Or perhaps even "punk", meaning it's not learnt from the classics or some "Elements" text; you just do it.


> Yes, this is taking the way learning is done in almost every case and field and applying it to programming. No, it still hasn't really caught on universally in computer science.

Man, it's almost like the field accidentally discovered a better way of working, and other people are slowly trying to make it more normal. Learning by fast feedback on individual actions is better than learning by reading and not acting.


It's not like that. "Learning by doing" predates language, much less writing. Apes do it. Many animals do, I'm sure. That has never gone away, even from the most stuffy old elitist realms of academia, has it?


Hmm. Does language itself come from inventing a way to express what one wants to express? What would Chomsky say? https://youtu.be/hdUbIlwHRkY


That might explain why so many people learning a novel programming language start, but nearly never finish, writing a book or tutorial about it.

For example, there are far more incomplete Common Lisp tutorials than there are working / fleshed out code examples on GitHub. The endless unfinished tutorials and "books" are suspiciously similar, and nearly always give up at the same not-yet-practical level.

The same goes for other technical subjects. Raspberry Pi is another area where you can find different versions of nearly the exact same incomplete tutorials everywhere.


This is because there's a point where you can do something but you don't know how, let alone how to explain it. If you ever mentor a more junior engineer, this comes up all the time, like "yeah, I have no idea why you broke the build; we can fix it together, but I'm not sure I can explain all my actions step by step".


I find programming fairly analogous to math, music, classical art, and any other endeavor with both a complex analytical component and a complex practical component: you need both to study examples and prior art, and to do lots of exercises yourself (unless one is a genius/savant, in which case pedagogy conventions seldom seem to apply).


All the code in TAOCP is read-first, since you're not readily going to type it in and run it at all.


This just shot PAIP to the top of my reading list. I’m going to go grab it now.


I hope you enjoy it! I got a lot from checking it out.


I assure you he did not kick start that idea in computer science.


Who did? (If anyone)


I doubt anyone did, really; a lot of results and major work in computer science came from mathematicians and academia, where reading had long been a primary method of teaching. There is a long and rich history of great papers and textbooks in early computer science.

Even getting to the more applied side of things, books are common. Perhaps the most well known one, https://en.wikipedia.org/wiki/The_Art_of_Computer_Programmin..., predates Norvig by several decades (not that I think Knuth kickstarted the idea either). Yes, it does have exercises after each chapter, but they are far outweighed by the very dense reading content, including applications of algorithms (and the exercises are optional, in the typical style of math textbooks).


TAOCP uses a significantly different methodology of teaching, and it's also very explicitly aimed at a significantly different market of programmer and in a significantly different area of endeavor within computer science.

It's a great work (I'd be lying if I said I'd pored over the entirety of it, but I've spent weeks on certain portions, like the MMIX fascicle and Sorting and Searching), but PAIP has a significantly better claim to "kickstarting" the "read code instead of write it" trend, because in TAOCP the code doesn't take center stage, and it was never the point for it to (hence the creation of MIX & MMIX).


I didn't say it attempts the same thing; I said it's an example (just one of many) of learning by reading in computer science that long predates 1992.

Pointing to examples (K&R) advocating learning by writing is not evidence that learning by reading was not widely known and used in computer science. That's a logical fallacy.


I edited my comment pretty significantly to clarify; you might want to give it another look. Putting this comment here to avoid the "Wow, what an irrelevant comment" problem. You might find your comment, while a decent literal reading of my initial comment, might not hold up to the new one.

I will point out, though, that I never said anything along the lines of "was not widely known [...] in computer science." What an absurd thing to accuse a comment of saying! I do insist that it wasn't widely used, though, and TAOCP, a series that's infamous for not actually being read often, isn't a great example of it being widely used, even if TAOCP did fit the criteria (which in my updated comment above, I contest).


> I edited my comment pretty significantly to clarify; you might want to give it another look. Putting this comment here to avoid the "Wow, what an irrelevant comment" problem. You might find your comment, while a decent literal reading of my initial comment, might not hold up to the new one.

I did not find that. This is the first paragraph of your edited comment:

> For anyone not in the loop, Norvig was the author of Paradigms of Artificial Intelligence Programming. This was a substantial contribution to the field of educational computer science literature, and helped to kickstart the idea that the way to learn is to read, not just write.

It's still wrong. It did not kickstart that idea in computer science.


They said:

> "Computer science [..] filled with books that encourage you to write code [..] PAIP [..] has you spend much of the book reading code."

You replied:

> "There is a long and rich history of books in early computer science."

which seems to be missing the point: it's not reading books they are talking about, it's reading code as something to learn from, compared to learning by writing code, as the change which PAIP kick-started. "No, you're wrong" is a low-quality rebuttal, even more so when it's backed by nothing more than the "assurances" of a throwaway account.


> has you spend much of the book reading code.

I went back to the original post that said it was edited, rather than that one. If it was there all along and I missed that part about reading code specifically then that doesn't really change what I wrote at all. TAOCP has vast amounts of code you are expected to read and understand (pseudo assembly and a fairly rigorous algorithmic specification language even if it may not be a "real" language).

But reading real code has long been a "thing". Why do you think UNIX and derivatives were so popular and widely used as teaching aids in universities in the 70s and 80s?

> which seems to be missing the point: it's not reading books they are talking about, it's reading code as something to learn from, compared to learning by writing code, as the change which PAIP kick-started. "No, you're wrong" is a low-quality rebuttal, even more so when it's backed by nothing more than the "assurances" of a throwaway account.

I don't think it's worth getting too upset over. The "assurance" is a figure of speech, not an appeal to my own authority. And I don't see why you're calling out a low-effort response, because it is in response to a low-effort claim. I didn't think it required anything more.


In throwawaylinux's defense, they've been using the account consistently for three months, and have a sizeable portion of karma (roughly (* 3 x) my own current 222, and (* 40 x) my own two days spent here). I believe, if I'm not mistaken, enough to use every point-based tool on the site (flagging, downvotes, colorbar). It's perhaps less than fair to wipe away a person who seems to be giving a good-faith effort like that, even if they're ultimately wrong about something.

I completely agree with everything else you said, though.


"When was the last time you spent a pleasant evening in a comfortable chair, reading a good program?" -- the famous opening sentence of the column on Literate Programming by Jon Bentley with guest Don Knuth. Communications of the ACM, Volume 29, Issue 5, May 1986 pp 384-369 https://doi.org/10.1145/5689.315644


Norvig presents complete programs for the reader to read and make changes to, instead of small snippets to practice with. You read more, trying to comprehend the program.


Also one of the two authors of Artificial Intelligence: A Modern Approach, which is probably the most commonly used intro AI textbook.


I would have mentioned that, but it's mentioned in the Stanford post and is also less interesting from a "You Should Know" standpoint on a website built on a Common Lisp-inspired Lisp. Not to mention that the book isn't as good as PAIP.


> Not to mention that the book isn't as good as PAIP.

Norvig would disagree -

http://www.norvig.com/Lisp-retro.html -

----- As an AI text, PAIP does not fare as well. It never attempted to be a comprehensive AI text, stressing the "Paradigms" or "Classics" of the field rather than the most current programs and theories. Happily, the classics are beginning to look obsolete now (the field would be in sorry shape if that didn't happen eventually). For a more modern approach to AI, forget PAIP and look at Artificial Intelligence: A Modern Approach. -----

But I would highly recommend reading PAIP. I felt that some important examples of classic AI (like SHRDLU, not to mention Eurisko) could have been included, but it's still really good.


I'm aware of what he said on it, and I don't disagree with what he said, but you're misrepresenting him by leaving the preceding two lines out.

«As an advanced Lisp text, PAIP stands up very well. There are still very few other places to get a thorough treatment of efficiency issues, Lisp design issues, and uses of macros and compilers. (For macros, Paul Graham's books have done an especially excellent job.)

As an AI programming text, PAIP does well. The only real competing text to emerge recently is Forbus and de Kleer, and they have a more limited (and thus more focused and integrated) approach, concentrating on inference systems. (The Charniak, Riesbeck, and McDermott book is also still worth looking at.) One change over the last six years is that AI programming has begun to look more like "regular" programming, because (a) AI programs, like "regular" programs, are increasingly concerned with large data bases, and (b) "regular" programmers have begun to address things such as searching the internet and recognizing handwriting and speech. An AI programming text today would have to cover data base interfaces, http and other network protocols, threading, graphical interfaces, and other issues.»

While yes, it aged poorly as an AI text, and excellently as a Lisp & AI programming text, it's a better book than AI:AMA, even if ignoring that it's based around a better language.


I don't disagree with his assessment, exactly, but it leaves out that PAIP is just great as a look at the craft of programming, by example, even if old-school AI is not your main interest. See how he breaks down problems, develops solutions incrementally, expresses them in code, suggests further work. It offers another look at themes from SICP.


> but you're misrepresenting him by leaving the preceding two lines out.

That was the only place where he compared PAIP and AIMA. I guess he considers these books to serve purposes different enough that comparisons in other areas make less sense. Back to where we started, PAIP isn't universally considered a better book, even though it's good.


"12. We must resist the temptation to belive that all thinking follows the computational model."

Hnnng. This is the whole thing. Turing machines are limited. Data (inputs from humans) have more power.


> forget PAIP and look at Artificial Intelligence: A Modern Approach.

I read and enjoyed that book 10+ years ago. Haven't been following AI/ML since then. Is it still a "modern approach"?


On the one hand, the fourth edition includes an introduction to deep learning that's as up-to-date as it's possible for a printed book to be. On the other, it's placed late in the book, and many people these days would skip straight to the neural nets. Up to you. I think it's valuable for condensing an incredible amount of stuff in an accessible unified introductory way.


Right, if your goal is to understand the broad field of artificial intelligence, AIMA is great. If your goal is to understand ML in particular, then there are a bunch of other, more applicable books (Deep Learning, Learning From Data, probably a bunch of more recent ones that I don't know because I haven't kept up super recently).


Well, he didn't say what kind of book. My understanding is that PAIP is treated more as an advanced programming techniques text than an AI text. I imagine that you have good competitors for AIMA as an AI text but no good competitors for PAIP as an advanced programming techniques text. (At least in the lispy languages world, I know of none other -- maybe except for the less ambitious On Lisp?)


Hey there! I'm one of the folks who've worked / are working on making PAIP readable online. The Safari version is captured in the epub version, so no trial is needed.


Yes, I know. That's why I pointed out that zipped HTML was available, but that's not quite a native web copy! Especially given how the zipped HTML in the epub format is usually presented (a million tiny HTML files in a single directory).


Artificial Intelligence: A Modern Approach is also a great book.


It's ~500 pages vs. ~2.5k pages...

The former already seems like a big project; the latter sounds impossible. (I'm assuming you're not skimming and are actually doing the problem sets.)

Is that in rambling prose? Dense math?


Norvig is also the guy in a colorful shirt you can spot in many Google TechTalks!


"Joins Stanford HAI" is correct; "Leaves Google" is not right–I'm keeping my Google badge, but will spend most of my time at Stanford.


Hey Dr. Norvig, really enjoyed your podcast episode with Lex Fridman a couple of years ago. Once you get settled in, it would be great to hear an update on how it's going, some color around the background and objectives of the program, and maybe just riff on the subject for a bit. Thanks!


I think sadly Fridman has since left the path of conducting interesting interviews with accomplished AI researchers and now caters to a kind of vapid pseudo-philosophical TED-talk crowd.


I mean, there's no question that he chose to branch out from strictly hard-core AI researchers. And I don't watch every episode, so I may have missed some of the "bad" ones, but from what I've seen, most of his interviewees are still respected / credible scientists, with a smattering of "other" mixed in here and there. In the past month or so he's had Jeffrey Shainline[1], Travis Oliphant[2], Jay McClelland[3], Douglas Lenat[4], Donald Knuth[5] and Joscha Bach[6] as guests. That's a pretty impressive group, IMO.

[1]: https://www.nist.gov/people/jeff-shainline

[2]: https://en.wikipedia.org/wiki/Travis_Oliphant

[3]: https://stanford.edu/~jlmcc/

[4]: https://en.wikipedia.org/wiki/Douglas_Lenat

[5]: https://www-cs-faculty.stanford.edu/~knuth/

[6]: http://bach.ai/


Yeah, I think I was overly harsh in my comment; see sibling reply for why this 'other' category gets me so riled up. If you ask me, Joscha Bach is also somewhat in the category I mentioned, but I see your point.


Interesting. When I look at Joscha's background and work[1][2][3], he seems pretty credible to me. Is there anything specific he's said/done that puts him in your "other" category?

[1]: https://en.wikipedia.org/wiki/Joscha_Bach

[2]: https://scholar.google.com/citations?user=Q_yeuCUAAAAJ&hl=en...

[3]: https://www.amazon.com/Principles-Synthetic-Intelligence-Arc...


The only thing I know him from is his CCC talk, which is definitely interesting, even dazzling for all the ideas it ties together, but in the end it isn't really presenting anything new, so to me it is a bit of intellectual popcorn, like a TED-talk. I don't know anything about his actual research (which I think is unrelated to most of what he talks about) even though I did my PhD in an adjacent field. I believe as a researcher/teacher he is not in the same category as Peter Norvig and some of the other people you mentioned.


Gotcha. Sounds like we may have a difference of perspective. I am not familiar with the CCC talk you speak of, and am mostly familiar with Joscha's work (to the extent that I am, which is not "deeply" so) from his work on MicroPsi[1].

[1]: http://www.cognitive-ai.com/page2/page2.html


I knew about Psi theory, from Dietrich Dörner, but not this.


I'm totally not an expert on this, but my understanding is that Bach's work on MicroPsi is a follow on / extension of Dörner's Psi theory.


Joscha is actually very accessible on Twitter (at least in replies to his tweets). Might be worth picking apart one that sits funny with you to test your understanding of his positions.


What interviews make you think that?


There were several that gave me this impression but Eric Weinstein and one about UFOs (forgot the name of the interviewee) come to mind.


I'm guessing you are referring to David Fravor[1] when you say "the one about UFO's".

I dunno. I'm a skeptic by nature (not just of UFO's, etc., but of almost everything) and I watched that episode and thought it was good. Fravor seemed like a sharp, knowledgeable, down-to-earth guy who was simply stating what he experienced... and went to great lengths to be clear that he wasn't necessarily positing that what he saw was caused by Little Green Men from Mars.

FWIW, I don't believe that intelligent aliens are visiting Earth, although I do believe that it's likely that there (is|was|will be) intelligent life elsewhere in our universe at some point in time. Given that bias, I didn't find anything particularly objectionable in the Fravor episode. But perspectives vary, of course...

[1]: https://lexfridman.com/david-fravor/


It's ok man, it's not a religion. You can skip those if you don't like them. He's also had James Gosling, Don Knuth (x2), James Keller (x2), Brian Kernighan and a bunch of other legends.


You are right of course, but this whole phenomenon really rubs me the wrong way. There is a whole host of podcasts now that provide a large audience to borderline crackpots. I believe it actually pushes people who are maybe otherwise quite intelligent to adopt such theories, because of the large audience that can now be reached with such stuff. Another good example is Avi Loeb and the alien 'Oumuamua theory, explained quite well here: https://www.reddit.com/r/slatestarcodex/comments/o1dhlf/comm...


One great reason to come to HN is that you can find a thread on any tech personality, and semi often they'll just show up in the comments.


I almost answered "who are you to talk about P.N." to his own comment :cough:


We're quirky like that.


After reading and running his code etc, it is a shock to see him turn up here.


Why, do you think he doesn't use the internet?


> norvig.com is protected by Imunify360

> We have noticed an unusual activity from your IP and blocked access to this website.


A huge fan of those string algorithms and those notebooks. Real craftsmanship.


Big fan of your spellchecker


Nice Peter.


He created one of the first courses on Udacity, Design of Computer Programs. It's free and quite good, from what I hear.

https://www.udacity.com/course/design-of-computer-programs--...


Yes. Very good, & also challenging :) Highly recommend


Agreed, this class was a masterpiece. Changed the way I write software


While we're on the topic, one of my favorite blog posts of all time is his "Teach Yourself Programming in Ten Years" https://www.norvig.com/21-days.html


in Ten Years

That sounds about right. Whenever I have to revisit code I wrote 3+ years ago, I tend to think "Why the hell did I do it that way?"

Occasionally, now knowing the trend, I'll even leave an apology in a comment to my future self. When I encounter those past comments, my general sentiment is something like "yeah, thanks for the spaghetti, asshole; would half a day to clean this up really have been so hard?"

And sometimes I remember the circumstances of that spaghetti, boiled around 2:00 AM, and think "nope, there wasn't time." Even though now I can do it better and faster.


Same here; this post has inspired me a lot over the years. I really want to take this chance to say thank you to Peter.


It's funny you mention this, as there is a big thread above that discusses his contribution to learning by reading, and his second point here is "Program. The best type of learning is learning by doing".


What did Norvig do at Google, does anyone know?


I ran Search for 5 years or so; then ran all of Research for the next 5; then had an increasingly smaller portion of a huge, growing Research initiative. This past year I enjoyed mentoring startups in ML through the Google for Startups program. But m0gz got it pretty much right.


Sorry for the snarky comment, but doesn't that cover the time when HN started noticing that search stopped returning results for the query that was requested and instead started to be "too clever" about it, with no way to override?


The time period he's talking about is 2001-2006. HN didn't even exist then. That was when Google was basically like magic and let you find stuff you never knew existed.

HN started complaining about Google being too spammy around 2008, and then about results getting too clever around 2012 or so.


That wasn't clear, because one of the slices didn't mention how long it lasted.


Did your job require you to code?

How did you manage to keep sharp at coding?


He was their first Director of Search Quality, and then switched to being Director of Research. IIRC he had a VP over him when I left (2014), but was still largely calling the shots in Research. Google Research had some very large wins come out of them in the mid/late 00s - their speech recognition and machine translation programs came out of that.

AIUI Norvig was also instrumental in Google's research philosophy, which is to embed research teams alongside the products they're developing rather than having a separate research lab that throws papers over the wall for later implementation. Somewhat ironic, given that he ended up heading the dedicated Research department, but Research was viewed as sort of an incubator whose successful projects would be "adopted" by some other product team. He's the reason machine-learning is pervasive at Google and ordinary SWEs use TensorFlow, rather than it being the sole province of Ph.Ds.


I saw that others, including Norvig himself, replied, but, fun fact, this snippet has been on his site for years:

"Note to recruiters: Please don't offer me a job. I already have the best job in the world at the best company in the world. Note to engineers and researchers: see why." (sic)


Probably the same thing such folks do elsewhere: allow some very smart people to not step on the rakes quite as often, and to be even smarter, by providing perspective and advice.


"One way to think of AI is as a process of optimization — finding the course of action, in an uncertain world, that will result in the maximum expected utility"

This specifically reminds me of reinforcement learning.
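
For concreteness, the maximum expected utility principle he's pointing at is usually written (standard decision-theory notation, not from the interview) as:

    a^* = \arg\max_{a \in A} \; \mathbb{E}[U \mid a] = \arg\max_{a \in A} \sum_{s'} P(s' \mid a)\, U(s')

which is structurally the same maximization a reinforcement learning agent performs over learned action values, a^* = \arg\max_a Q(s, a).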

"Now that we have a great set of algorithms and tools, the more pressing questions are human-centered: Exactly what do you want to optimize? Whose interests are you serving? Are you being fair to everyone? Is anyone being left out? Is the data you collected inclusive, or is it biased?"

This is like a meta-view of automation. When tasks were automated, people were able to do higher order thinking. For example, when people no longer had to do menial excel spreadsheets work, they were able to glean more meaningful insights from the data.

"The next challenge is to reach people who lack self-confidence, who don’t see themselves as capable of learning new things and being successful, who think of the tech world as being for others, not them"

This is the same insight Ajay Banga (CEO of Mastercard) had about giving financial access to the Next Billion Users.


Big get for Stanford


Pentagon software developers with a kindergarten-grade education in AI can route through Stanford as part of their professional development and climb the career ladder.


Made a few million and can afford to be a professor again. :). Academic life is the best life.


Congratulations Peter!

I was first introduced to Mr. Norvig when taking a MOOC course (Intro to AI). There I witnessed his approach to problem solving and his clever & elegant solutions. I couldn't help but be in awe.


Wonderful for the students who he will teach. Kind of off topic, but his positive attitude in this short interview towards training AI workers reminds me of the same positivism in parts of the new book AI 2041. I also like the focus on training practitioners. I am 70 years old, but I still look forward to work every morning as a deep learning engineer.


Hmm, are we talking about the same Peter Norvig who is an expert in cetaceans?


Current title ("Peter Norvig Leaves Google to Join Stanford HAI") seems to overstate slightly. According to Peter: "I still have my affiliation with Google, but will spend most of my time at Stanford."


Ok, we've taken 'leaves' out of the title.


Do you have a link?


Wow, the title is so wrong...


I suspect Peter should have left Google long ago.

Google's approach to ML/AI probably is not what Peter really liked, judging from his past writings and books. There is certainly a strong sense of symbolic and rule-based intelligence in them, based on my limited reading and even more limited understanding.

But I guess Google paid very well... In reality, Google probably should sponsor Peter and let him work at Stanford...


Given that "The Unreasonable Effectiveness of Data"[1], which Norvig co-authored, and his famous statistical spell checker[2] are both about showing how rule-based and symbolic intelligence falls short, I don't think your thesis is quite correct.

[1] https://static.googleusercontent.com/media/research.google.c...

[2] https://norvig.com/spell-correct.html


I think they referred to him internally, which makes sense. I follow him on LinkedIn, and most of his posts are about GCS AI services, which are external-facing.


Basically sales pitches...


When I was at Google I never heard Norvig’s name in connection with anything. At any internal AI conferences Jeff Dean would be present, Kurzweil, etc., but never Norvig.


Rule-based approaches and all that were from long ago, before data was so abundant. In other words, before the big data era.



