Great Mathematicians on Math Competitions and "Genius" (2010) (lesswrong.com)
191 points by jimsojim on July 15, 2016 | 81 comments



As a former "successful" mathlete (never did IMO or Putnam, but consistently made the top five in state competitions), I agree that competitions won't help you develop mathematical maturity, but it is interesting how they are viewed in the wider world. It's been a number of years since my math tournament days, but people are always very impressed if I mention that I won math competitions in middle and high school. It is odd to me, because I don't really care about my math medals anymore and I've since accomplished things that I value a lot more, but there you have it.

Also, even though competitions won't help you develop as a mathematician, I still think it was a good experience for me to get out of school for a day and hang out with a bunch of other math nerds. That part of it was a lot more valuable than the competition itself.


> Also, even though competitions won't help you develop as a mathematician, I still think it was a good experience for me to get out of school for a day and hang out with a bunch of other math nerds. That part of it was a lot more valuable than the competition itself.

I haven't done math competitions, but I feel the same way about programming competitions. I don't think I'm a better programmer because of the competitions. But I do think spending 4 hours, 2-3 days a week in a room with other programmers who were also there voluntarily made me much better. It's not like I stopped focusing on my own projects or coursework and did this instead. It was just an additional 12 hours a week of practice and socializing.

Some problems were hilariously artificial too. I still remember one, where 10-year-old Suzie had a lemonade stand and had been keeping track of her profits each day. You needed to find the 3 consecutive days in which she had made the most total profit. The catch? It needed to run in less than 1 second and the largest case could (which means it will) be n=9999999. (I may be off by an order of magnitude).

Little Suzie had been running her lemonade stand for well over 27,000 years, apparently, but would throw a tantrum if it took more than 1 second to compute her answer.
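For what it's worth, the intended trick was presumably just a linear sliding window. A rough Python sketch (the function name and sample data are mine, since I'm recalling the problem from memory):

    def best_three_day_run(profits):
        # Largest total profit over any 3 consecutive days, in a single O(n) pass.
        window = sum(profits[:3])   # profit of the first 3 days
        best = window
        for i in range(3, len(profits)):
            window += profits[i] - profits[i - 3]   # slide the window one day forward
            best = max(best, window)
        return best

    # e.g. best_three_day_run([1, -2, 5, 3, 0, 4]) == 8  (days 3 through 5)

A single pass like that is comfortably fast in a compiled language even with n in the millions; in CPython you'd probably reach for numpy to stay under a 1-second limit.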


> I haven't done math competitions, but I feel the same way about programming competitions. I don't think I'm a better programmer because of the competitions.

While you may be correct with respect to actual day-to-day programming, I'm 100% sure that programming competitions make you a much stronger interviewer for typical software developer interviews.

I've never competed in programming competitions, but I've seen the types of questions they ask, and many if not most of the interviews I've experienced recently use the exact same kinds of questions. And in some cases, even the exact same questions!


That just makes me think that they are not doing interviews right either -- ranking based on how quickly you can whip up an algorithm for a tricky question.


I just conducted several technical interviews. Starting with a very basic question, even for people with years of experience, shows you quite a bit. It's not about getting it correct; it's whether you can get in the correct ballpark and evaluate what you're doing.

Create a function that returns true or false if _.

This was followed by far more hand-holding than I was expecting, but most people got the idea fairly quickly. The only person who failed wanted to add a lot of print statements for no reason, and then did not respond to help.

IMO, good interview questions are such that most people pass. I mean, you're bringing them in, so asking something you had to google is a waste of time. The goal should be to minimize false negatives while still removing some people.

The classic what is an Object, what is an Interface is just filler that tells you little. And very technical questions have ridiculous false negative rates. So, just softball an easy code problem, ask about background and fit, and call it a day.

PS: Remember you're interviewing people for a reason. When a good fit backs out you lose not just time but potentially a great coworker.


The PS is something people forget all the time. It's a learned skill to make interviewees feel like this isn't an adversarial meeting, but a mutual interview.

That said, you're right. Just this week I interviewed someone with almost 30 years of experience, some of it dev, some architect, some management, for a pure developer role, and gave him "Write a function(n) that takes an integer and gives you the nth number in the Fibonacci sequence." He got very angry and said he didn't know the algorithm, so I drew it on the board and explained it (a red flag for a former "scrum master" not to know what the Fibonacci sequence was), and he told me that it was arbitrary and academic and a waste of time. I asked him to do it anyway, since the point was to see his style and approach, and he got mad and left. Most junior devs solve this problem in ten to fifteen minutes, and I consider it more of an interactive ice breaker to get us both standing and talking than a stumper.


Are you sure that's really the hiring process you want?

Junior devs were more recently in school, which is the last time such knowledge is needed. If you have actual technical challenges relevant to your work, why not take that hint and shift the interview to something that might possibly come up on the job and that isn't trivial for a junior dev?

Instead of finding out what that candidate was good at, you just found out which academic trivia he doesn't remember, and that he has a low tolerance for irrelevant bullshit.


There was no trivia involved. I wrote the sequence and explained it - it's not hard to grasp. It's a simple for loop in the easiest solution and a simple recursive method if they want to show off.
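For reference, the iterative answer most candidates land on looks roughly like this (a sketch; the function name is just whatever they pick):

    def fib(n):
        # nth Fibonacci number, 0-indexed: fib(0)=0, fib(1)=1, fib(2)=1, fib(3)=2, ...
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a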


I like recreational mathematics too. Is the Fibonacci sequence related to the job in any way? If not, I say it's trivia.

Look, someone looking for their first coding job is surely willing to walk you through CS101 assignments. It's a fine icebreaker if someone has no industry experience.

Someone more senior should be trying to have an entirely different conversation with you. Do you understand and value what they bring to the table that a junior dev doesn't? Will you adapt to take advantage of their strengths, or will you insist on following your process just because it's your process? Is there space for them to make a contribution or are you just looking for someone who will code up what you ask for?

Interviewing is a mutual search for fit. There wasn't one. Maybe that's because you weeded out someone who couldn't complete a freshman homework assignment with help. Maybe not.


It's not about getting it correct; it's whether you can get in the correct ballpark and evaluate what you're doing.

Is it, now?

In many of these sessions you are unequivocally dinged for having a less-than-perfect answer by the time the bell rings. Or if there is some fuzzy acceptance standard in the back of their minds somewhere -- they certainly won't condescend to tell you what it is.

Instead, what you usually get is: "Uhh, hi. Here's a Google Doc. Can you type a fully working implementation X for me while I boredly watch? BTW I don't normally program in the language you're programming in and so probably shouldn't be doing this session with you anyway, as that will only work against you. But then again, it's not like I care -- I'm just doing this because they told me to."


> It needed to run in less than 1 second

I've always wondered how these competitions measure the program's runtime consistently. I guess the easiest way would be to specify the CPU the program will run on and use the bash `time` builtin, but it seems inconvenient for participants to obtain that exact CPU, and controlling for the cache might be difficult (maybe a kernel module that clears the cache before the program runs, ensures the program runs only on cores dedicated to it, and keeps memory accesses consistent enough that L3 behavior is comparable).

On the other hand, they could just count instructions run, which would be completely deterministic, but might lead to some "optimizations" that don't make sense, like using `rep stos` to zero memory instead of loops. Using a higher-level bytecode would have the same problems. They could also give different weights to each instruction, and possibly to memory accesses, but even then students would be ignoring important real-world cache or pipeline optimizations.

But maybe I'm just overthinking this. This type of problem would be better suited for GPUs anyway.


The time limits are overkill unless you chose the wrong algorithm.


In fact, you can often get a good ballpark-estimate of what big-O runtime your algorithm needs to have, just from the time limit. This can work as a hint on how to solve the problem.
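(Rough arithmetic, assuming the usual competitive-programming budget of around 10^8 simple operations per second: with n around 10^7, an O(n^2) solution needs roughly 10^14 operations and is hopeless, O(n log n) is around 2x10^8 and borderline, and O(n) fits easily within a 1-second limit.)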


The other commenters are right, the time limits were a non-issue if you chose the right algorithm. It might be the difference between solving it in 30 years and 300ms.

I'm probably misremembering that specific problem, but they were typically very easy problems with a huge n and some small 'gotcha' in the scenario that allowed an algorithmic or dynamic programming trick. Not realistic at all, but they were lots of fun. Not all of the problems were like that, but those were the hardest to get.


In high school, I participated in "WYSE - Academic Challenge" (Worldwide Youth in Science and Engineering), which was a test-taking competition across a variety of topics. You'd take 2 tests out of 7 subjects, and there were individual test prizes and combined school/team prizes.

I originally participated to hang out with other nerds, but wound up getting a few state-competition medals in the computer science test. The multiple-topic thing really helped cement my love for computer science/engineering. That was the most valuable part for me. I don't even know where my medals are anymore.


I was running a BaaS startup for a bit. While researching our competition-to-be I remember finding out that Parse's Kevin Lacker is a 2x Putnam winner. Couldn't help but have instant respect for him and the whole company.

Personally as a kid I was often "sent" to these competitions but never bothered to prepare for them at all. I stupidly assumed it was a pure IQ play. Now I regret not taking it seriously. Makes me have even more respect for people that do.


> Personally as a kid I was often "sent" to these competitions but never bothered to prepare for them at all. I stupidly assumed it was a pure IQ play.

I think this is the biggest disconnect between perception and reality of intellectual competitions (debate, math, chess, whatever). In reality, they require just as much practice to succeed as any other sport.


I'm from that world of math olympiads. I never reached the IMO, only took top places in regional competitions (math, physics, programming), and I know a lot of people who have been there and who never participated. Some won medals, graduated with an MSc, and gave up on science. Some were not so good in competitions but became very smart scientists and great math teachers. There's no rule, and I don't think this system is just a selection of the best that discourages children who did not succeed from choosing math.

It's just much easier and more natural to see the beauty of mathematics while spending enough time learning, solving problems and engaging in competitions. Oh yes, I'm still seeing it despite years at MIPT with some real math. There's a lot of fun in it. For example, we had "mathematical battles", where two teams had to present and defend their solutions and earn points from a jury. It's also a very special and friendly environment, where you can meet people, connect with universities and build a social network that will serve you for the whole of your life. I still have a lot of friends from the summer math schools I attended in the early 90s. For many it's also a social lift, egalitarian by its nature, where the distinction between rich and poor is almost invisible (not like on the street or a schoolyard); it allows many children from small towns across the country to enter top universities and build successful careers, not necessarily in science.

I will never blame this system for presenting math "in the wrong way". It doesn't have to show the world of grown-ups. And, by the way, we never heard the word "genius" (except applied to Pushkin or Einstein).


Like a few commenters on this thread, I participated in a lot of math contests in high school. Never made the IMO but placed at/near the top in regional and statewide competitions.

On the one hand, my experience mirrors some of what the article talks about: I learned very quickly that the things professional mathematicians work on are very different from math contest problems. (I went to college intending to major in math, but switched to CS as soon as I took a semester of abstract algebra.)

On the other hand, the article seems to imply that many great mathematicians look down on math competitions for not giving an accurate portrayal of math as a career path. I don't see why that is an issue. My high school was a math magnet school, and 100s of students participated in monthly contests like California Math League. Almost every participant that I talked to in those days did math contests because they were fun, or because they were an interesting challenge. I never met anyone who said "I want to be a mathematician, and contests are clearly the first step on that road."

For me, math contests are like high school sports or drama or anything else. They appeal to certain subgroups of kids, they're fun and hopefully educational/useful in some way, and they don't have to be more than that.


There's definitely an argument to be made that disgruntled, great mathematicians act as a barrier to entry. Being exclusionary hardly attracts anybody to an already difficult field. Besides, who's to say that whatever they describe as "the one prescribed path" must be what works for everybody? I think the will and drive to study mathematics is less cookie cutter than that.


I was pretty good at math competitions (2x putnam fellow) but never that great at being a mathematician. But I still think math competitions are great experience. Winning at any sort of competition teaches you how to be persistent, how to work hard, how to recover from setbacks mentally, how to maintain focus for a long period of time, and how to gear up for critical moments where you need to perform.

For example, when I first went to the math olympiad summer program, I had trouble focusing on a single math problem that I had no clue how to solve for three hours straight. It's hard! The training program basically forces you to do that over and over, so I ended up learning a lot of how to focus for large chunks of time and do useful things to attack a problem that I didn't initially know how to solve.

I went into computer stuff instead of math stuff after college, and there's a lot of stuff I never used again. Algebraic topology, all the geometry theorems they don't teach you in high school, you name it. But the ability to work really hard on a single technical problem until you nail it, that's been constantly useful. Especially in startups.


100% agree that learning focus and persistence was a very valuable result of doing math competitions.

Completely random: If you're who I think you are, I still remember seeing your name on the list of perfect AHSME scores in 1998ish. I think we met briefly at an ACM competition in '03 (we played Mafia for a while in a big group, and were briefly introduced by Po-Shen Loh who was one of my ACM teammates.)


Yeah that sounds like me! ;-) Although by '03 I was in grad school; ACM stuff was probably '01 or '02.


You're right! It was '02, in Honolulu.


For me, this applies equally to programming competitions and white boarding algorithms in interviews.


I think some folks overdo the white boarding. But I believe it is needed - maybe one or two problems on the whiteboard, with the rest of the interview focusing on design & software engineering (metrics, resiliency, etc.).

I've seen folks who have a lot of coding skill on their resume fumble simple white board problems. I have fumbled simple white board problems myself when being interviewed (it was for a s/w engg position, but I had spent the past several years in architecture and away from any real code, so it was expected).

The point is, white boarding is OK if you have a well-defined problem solvable in 45 mins and if it is just geared to assess your familiarity with code. I don't think it's a reasonable expectation to come up with new approximation algorithms for NP-complete problems and solve+prove them on a white board in 45 mins.


I've seen folks who have a lot of coding skill on their resume fumble simple white board problems.

Solving a coding problem on a whiteboard tests your ability to solve coding problems on a whiteboard. That's a bias. It makes people who get nervous standing up and being the centre of attention less likely to pass the test. If coding on a whiteboard is a part of the job then fair enough, but if it isn't then you're introducing something to the interview that filters people out based on something other than their ability to do the job - and that means you're not necessarily recruiting the best person. I believe that's a good reason not to use whiteboard tests very often.


While that is in some ways fair, interviews inherently involve being the center of attention; people who do poorly at them because of nervousness will always have problems regardless of the format.

While it's true[0] that work samples are substantially better at evaluating candidates than informal interviews, they have their own downsides. For example, I have heard many people balk at multi-hour homework assignments as part of the interview process as too much of a time commitment to one company. In the end, any screening technique will be flawed. That doesn't mean that we shouldn't use them.

[0] http://bobsutton.typepad.com/my_weblog/2009/10/selecting-tal...


To be fair - we don't look for syntactic correctness of your solution. You miss a semicolon here and there - that's cool. You invent your own method/function to abstract out things like creating threads or communicating between processes - that's fine (in fact we provide examples of these and say feel free to use something like this).

What we are interested in is algorithmic correctness. I think for someone who develops for a profession, writing an algorithm on a white board shouldn't really be a big deal. Agree on the nervousness... I don't know a good way around it though... We normally do interviews on the phone using collabedit so the candidate can sit in their own comfort zone. I also make it a point to mute my phone and not to talk unless asked to.


There's a world of difference between being a skilled solver of problems, and being a skilled extemporaneous presenter of your problem-solving process on the fly. Most people are not good extemporaneous speakers, and the reason they're fumbling is not because they're incapable of solving the problem, but because they're juggling the following things:

* Presenting the solution.
* Determining the solution.
* Presenting themselves.

It's not really an accurate measure of how well they work day-to-day, because none of us show up to work and are given 15 minutes to present a solution to a problem we've not studied in years.

You're basically testing peoples' ability to improvise a solution while discussing it with two or three strangers. It's not surprising that there's a high failure rate in that.


Doesn't matter. White-board-as-IDE can throw you off so much that you can't think right about the big picture idea, especially if talking in front of people you just met in a stressful interview. It's nothing at all like explaining an algorithm to a peer after you've been working there and feel comfortable, etc.

Whiteboard-as-IDE is just bad, all the time.


What would you use to ascertain good coding skill? It is impractical to provide someone with a problem set and have them come back after a week. To be honest - that's the approach I'd really love to use.


Why not sit with them at a computer, let them set up their own preferred working environment, let them have an interactive shell prompt within which to execute snippets of code while they tinker and develop the solution, etc.?

I don't understand your thinking -- it seems like you picture it as a dichotomy between asking trivia questions which must be on a whiteboard, vs. assigning an extensive college homework problem set -- both of which seem like terrible ways of assessing on-the-job skill to me.

The questions you would ask at the whiteboard are probably fine questions. It's the way you allow them to be solved that's the problem.

For example, if someone asked me to write some code in Python that computes the median of a stream of numbers, I would probably do something using itertools-based generators, and/or something using the heapq library for a heap.

I do not have the APIs of these standard modules memorized. I absolutely could not write down their usage on a whiteboard. It wouldn't just be minor syntax issues. There would be so much looking up of which function argument goes where, which thing has no return value but mutates the underlying data type, etc., that it would just totally and completely prevent me from being able to fluidly solve the problem or explain what I'm doing. The whiteboard nature of the discussion would be a total hindrance, alien to the experience of actual day-to-day programming.

And I've used both heapq and itertools for many years, time and again, in easily many thousands of lines of code each -- and I still always need to look up some documentation, paste some snippet about itertools.starmap or itertools.cycle into IPython, test it on some small toy data, poke around with the output to verify I am thinking of the usage correctly, and then go back over to my code editor and write the code now that I've verified by poking around what it is that I need to do.
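For illustration, here's roughly where I'd end up on the streaming-median question once the docs are open -- the standard two-heap approach, as a sketch (the names are mine, and I'd still sanity-check heapq's argument order in IPython first):

    import heapq

    def running_median(stream):
        # Yield the median of everything seen so far.
        lo, hi = [], []   # lo: max-heap via negation (lower half), hi: min-heap (upper half)
        for x in stream:
            heapq.heappush(lo, -x)
            heapq.heappush(hi, -heapq.heappop(lo))    # move lo's largest over to hi
            if len(hi) > len(lo):                     # keep lo equal in size or one larger
                heapq.heappush(lo, -heapq.heappop(hi))
            yield -lo[0] if len(lo) > len(hi) else (-lo[0] + hi[0]) / 2

    # list(running_median([5, 2, 8, 1])) -> [5, 3.5, 5, 3.5]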

That's just how development works. It never works by starting with a blank editor screen and then writing code from top to bottom in a straightforward manner. It doesn't even happen by writing some code and then just going back through the same source file and revising.

100% of the time, you also have a browser with Stack Overflow open, google open, API documentation open, and you also have some sandbox environment for rapidly either pasting code into an interpreter and playing with it, or rapidly doing a compile workflow and running some code, possibly in a debugger, to see what's going on.

I do not understand why you wouldn't replicate that same kind of situation when you're testing someone. What you want to know is if they can efficiently tinker around with the problem, use their base knowledge of the relevant algorithm and data structure to get most of the way there, and then efficiently use other tools on the web or in a shell or whatever to smooth out the little odd bits that they don't have an instantaneous recall or photographic memory of.

In fact, if they do solve some algorithm question start to finish, it just means they have crammed for that kind of thing, spent a lot of time memorizing that kind of trivia, and practicing. That's not actually very related to on-the-job skill at all. By observing them complete it start to finish, you're not getting a signal that they are a good developer (nor a bad one) -- only that they are currently overfitted to this one kind of interview trivia problem. You do not know if their skill will generalize outside to all the other odds and ends tasks that pop up as you're working, or as you face something you don't have 100% memory recall over.

Anyway, the point is you can still ask development and algorithm questions, but you should offer the candidate a comfortable programming environment that is a perfect replica of the environment they will use on the job, with their own chosen editor, access to a browser, same kind of time constraints, comfortable seating, privacy, quiet, etc.

And you should care mostly about seeing the process at work, how they verify correctness, how they document and explain what they are doing. If you're asking problems where mere correctness is itself some kind of super rare occurrence, like some kind of esoteric graph theory problem or something, you're just wasting everyone's time.


My reply is about 2 days late. But thank you for the feedback... I am genuinely trying to improve the process since I've been at the receiving end of it at one time as well.

I'd definitely like to run something like this, but I'd need folks to install a good screen-sharing tool (join.me, webmeeting or some such thing...). But I'll be open to asking about the candidate's willingness to do so. That way they can get working code in an environment they are comfortable in...

We do most interviews remotely and offer a remote work setup as well. So it's not always practical to physically have the person code in front of me.


One of my most enjoyable experiences as a candidate was when a company shared login information with me for SSH-ing into a temporary virtual machine they had spun up on AWS solely for the interview. They asked me what editor I'd like present, and separately made sure any environment details were taken care of ahead of time.

Then I was able to simply log in with my shell here at home, and the screen was shared with the interviewers. The whole interview took place in console Emacs, with the interviewer pasting in a question, me poking around and asking clarifying questions, then switching over to IPython, tinkering, and going back and writing code incrementally.

I think all of the modern front-end services that do this kind of thing are pretty terrible, like Coderpad, HackerRank, TripleByte, or more UI-focused screensharing tools. Heck, I'd even opt for just a Google Hangout if we had to do it by UI screen sharing.

I think the low tech route of SSH is vastly superior.


To be fair - we don't look for syntactic correctness of your solution.

Hey, that's great. But the thing is, you never know what you're going to get.

Some interviewers absolutely do insist on 100% syntactical correctness (along with optimal performance on some made up combinatorial problem) -- even though they aren't giving you a shell or IDE to run your code iteratively. Sometimes they won't even give you a decent text editor -- though it may sound ridiculous, it's become very common, of late, for interviewers to ask you to just type directly into a Google Doc -- with variable-width fonts, autocapitalization and other helpful features enabled by default -- even at places where you'd think they really, really ought to know better.

I also make it a point to mute my phone and not to talk unless asked to.

Again, it sounds like you're hip as to the basics of how these sessions should be run, and that's great.

Unfortunately, it's not generally so, out there. Quite a few interviewers seem oblivious to the basics of phone etiquette (using speakerphones with an obvious echo behind them, for example). Or just aren't particularly communicative for one reason or another. And sometimes it turns out the person you're talking to doesn't really know the language you're coding in -- so you have to burn precious minutes explaining the basics of the language to them, along with the solution you're presenting.

That's the fun part about these sessions. You just never know what you're going to get!


Finding a whiteboard and brainstorming about how to solve a problem is absolutely part of the job. Unfortunately it's hard to find problems that can be explained to anyone with a reasonable programming background and solved in 30 minutes. So anything you do under those constraints is going to be artificial.


> Finding a whiteboard and brainstorming about how to solve a problem is absolutely part of the job.

But let's face it, this skill is trivial to an otherwise intelligent person, and it's not the reason whiteboard coding is done in interviews. It is there to assess one's problem-solving or even specific coding skills. Unfortunately, in a nearly quantum-mechanical way, observation here affects the outcome.

I did partake in CS competitions at the regional level, and to me they are less stressful than whiteboard tests. There you just have a console or a sheet of paper and a few hours to hash it over. No 3 pairs of eyes staring at your back. Guess it's the same for many others: the thing that turns reading a figurative newspaper chess column into a chessboxing tournament. One might be good at chess and OK at boxing, but not necessarily at the same time.

(and no, unfortunately I don't see a good way to fix this)


And if you overthink it (how can you not? no idea what they're looking for) you appear to clutch or ask too many questions. No shared context means a very artificial exchange.


My team has impromptu whiteboard meetings lasting from 5 to 60 minutes at least once per week. Being able to diagram and communicate your ideas is one of the biggest things we are looking for. We have already done phone screening and an in-office coding exercise by this point, so the whiteboard is a test of how you might integrate with the team.


"Being able to diagram and communicate your ideas is one of the biggest things we are looking for. "

To me this seems like a basic requirement, like reading and writing - is it really such a hard skill that it's worth filtering for? I would think it's fairly easy to learn by attending meetings and watching others, if somehow one is unfamiliar with the technique. Or is my expectation of what people generally can do way off the mark?


If that's the case then presenting on a whiteboard certainly should be part of your interview process. My post is a complaint about the ubiquity of it and that it's used inappropriately, not that it's a thing teams use at all. I understand that it has its place.


I've seen a correlation in working mode between pairing on a competition-style programming problem (to evaluate a job candidate) and pairing in real work.

I'm not looking for leaps of insight though (I explain the insight required), and we're both in front of the IDE and can search the web to clarify simple questions. It's more communicating a problem and the outline of a solution, and seeing if someone is able to understand what you say and is fluent in turning ideas into code in their chosen language.

I personally despise gotcha questions that rely on you either having seen the problem before, or getting lucky enough to spot the insight in a pressure situation.


This explains why:

From the article:

From Terence Tao:

[What] professional mathematics is may be quite different from the reality. In elementary school I had the vague idea that professional mathematicians spent their time computing digits of pi, for instance, or perhaps devising and then solving Math Olympiad style problems.

In real life it's the same. You don't cook up interview coding problems and solve them all day. You have real-world work to do, and often that requires a degree of productivity, not knowledge. This is even more true given how cheap and easy access to knowledge has become because of the web.

From GH Hardy:

it is useless to ask a youth of twenty-two to perform original research under examination conditions, the examination necessarily degenerates into a kind of game, and instruction for it into initiation into a series of stunts and tricks.

Notice how closely this matches interviews which mandate that people demonstrate expertise in trivia, or quickly state the Big-Oh complexity of some sorting algorithm.

From Andrew Wiles:

Real mathematical theorems will require the same stamina whether you measure the effort in months or in years [...]

Almost any real measurement of algorithm expertise is in seeing how good a person is at coming up with a new algorithm for a novel problem. What exactly is your knowledge of 100 sorting algorithms worth, when it can be found with a Google search that takes a few milliseconds?

Interview algorithm gurus, to me, are no better than those smart alecks who used to show up at school having memorized multiplication tables and then present that as some kind of mathematical ability.


Love / hatred of the whiteboard process being of course a perennial topic here, there is one major distinction between Math/CS olympiads and the typical industry whiteboarding session: with the Math/CS olympiads, you can count on a reasonable degree of professionalism (in terms of forethought and judgement, and overall solidness of execution) -- not to mention basic common sense -- behind both the curation of the actual problem sets, and in the design of the test environments. These people know what they're doing, and their process is mature and generally well accepted. And the actual problem solving environment is designed for maximum efficiency and fairness.

In the tech industry, however -- here and there you'll find companies that know what they're doing: they actually put a lot of thought into picking reasonable problems to solve, and present the candidates with reasonable conditions for doing so. They're clear in stating both the problem and what they expect; and the interviewers are reasonably personable, and have great communication skills.

But quite often, it's a total shit show: problems are often poorly stated (and sometimes ridiculously complex); combined, importantly, with a poor or completely absent statement of what is really expected from the candidate (As in -- do they want a perfect working solution, on running code? Or does it suffice to just outline the general idea, perhaps with pseudocode? Quite often this is never stated up front); along with gratuitously taxing and sometimes downright annoying conditions in which to tackle this allegedly crucially important problem you're asked to solve (among my favorites being: whiteboards with barely usable markers / erasers; or their electronic equivalent -- Google Docs, or other ridiculously unusable coding "platforms"; voice-only sessions of nearly any kind, but especially those where the interviewer clearly has limited communication skills for one reason or another; and then of course sessions where the interviewer doesn't know the language you're coding in very well, and you have to constantly pause to explain the basic facets of said language, along with the solution itself).


For things like coding an optimal algorithm from scratch, this might be true. But there are interview questions which have a more practical appeal, and this definitely does not apply to math Olympiads.


George Pólya claimed that the British emphasis on puzzle-solving had set British mathematics back a hundred years. He and Hardy tried to get rid of Cambridge's emphasis on the Tripos exam and the whole "Senior Wrangler" thing.[1] They didn't entirely succeed.

[1] https://en.wikipedia.org/wiki/Senior_Wrangler_(University_of...


When I got to Harvard, probably the best Putnam/puzzle types of solvers there (among the students) were Don Coppersmith and Angelos Tsiromokos. Don went on to do very important work in cryptology. Angelos went on to leave mathematics; his next gig was as a translator for the common market. (He was probably better in word games/puzzles -- Scrabble, crosswords, and so on -- in English than I was, even though it was his third language.)

Ofer Gabber and Ron (Ran) Donagi also did very well on a semi-formal Putnam, and did so at very young ages. They went on to decent math careers.

I also took the Putnam at very young ages, but never cracked the top 100. I went on to leave mathematics.

Nat Kuhn was perhaps the best of the undergrads then. He went on to be a psychiatrist.

Andy Gleason was perhaps the best at that kind of thing among the faculty. Wonderfully nice guy, and my de jure thesis advisor, which was a bit awkward because he never got a PhD himself and didn't quite understand my stresses; I didn't realize the no-PhD part until after the fact, when I saw his resume in connection with his election as president of the American Mathematical Society.


Elder Harvard math professors who have PhDs don't really empathize either, because they are at the very high end of the talent spectrum, and they came up at a time when massive breakthroughs were ripe for the picking.


Nowadays the word 'coach' usually applies to athletics, but it dates to the 1830s, when Cambridge University started awarding math degrees by competitive examination. A coach was someone who gave you a straight, smooth ride to your degree, just like a horse-drawn coach on one of the new paved roads that began appearing in England around that time. So coaching was hip.

The high scorers on the exam were a Who's Who of British science in the 1800s. In 1854, for example, the second highest scorer was James Clerk Maxwell, the greatest physicist of the century, who gave humankind its first look at a fundamental law of nature. The guy who beat Maxwell became a coach and spent the rest of his life teaching people how to do well on the exam.


Note, however, that Grigori Perelman was very good at mathematical olympiads.


And Terence Tao, who is quoted, was also extremely good at olympiads. As I see it, the olympiads are a good, cost-effective way of identifying talented kids; they are in a way just IQ tests.


I think the point being made is that the converse need not be true: kids who are not good at these sports need not feel discouraged from pursuing mathematics.


Yep. Success in math contests is correlated with success in producing mathematical research/getting tenure/measure what you will. But if you sample math professors, you won't find that most of them were problem solving champions. (A large minority perhaps, and probably more in number theory.) The common factor is more likely that they really like doing mathematics, and have the skills needed to succeed in secondary aspects of the job.


But what does it mean when the people preaching this claim are all Olympiad winners?

The one math genius I know who despised Olympiads, ended up leaving academia over a famous but wrong proof.


I know plenty of very good mathematicians who did not participate seriously in this kind of thing. (Disclosure: I work as a mathematician at a research university.) I'm not sure why the only ones quoted here are mainly ex-winners. Indeed, while these competitions may have the positive effect of putting like minds together, it's possible that they have the negative effect of discouraging those w/o the aptitude for this particular kind of competitive sport (which is what it is), or who do not have access to the kind of coaching and practice that successful competitors often have.


Keep in mind you can get the Fields Medal only if you are under 40 years old. That's more likely when going into academic maths straight away, without enjoying the relatively lucrative industry jobs that most of the IMO contestants I know take.


We also need to hear from an important segment of the math population: those who burned out or dropped out for reasons related to math perceptions engendered by the math-competition culture. We all know people who did great at math or science competitions in high school but just disappeared from the scene after that. One may say that there are other underlying causes that led them not to live up to some hypothetical promise, but I strongly feel that success and failure at math competitions can be a cause in itself. A lot of space is dedicated to how math olympiad triple-gold medalists went on to become great mathematicians. We also hear about those like Grothendieck and Hardy who weren't big in the competition circuit.

But missing are the stories of those who didn't make it big in spite of great competition performance, and those who fell out of math because of failing at math competitions.

In India, for example, competition math is everything at the high school level. This is because competitive exams like the famed IIT JEE, etc. are essentially variations on the competitive math theme. A few serious math enthusiasts do take up broader math-specific exams for math institutes, but those numbers are minuscule. The worst affected in my experience, are the talented and the enthusiastic who were discouraged and/or dropped out altogether because of failing at optimizing their skills and learning for competitions and similar exams.


This seems like a more general portrayal of how inaccurately tests demonstrate domain knowledge. It is true for math olympiads, but also true for a wide range of subjects. Nonetheless, such tests are still relevant in providing a hint of what a person might know.


I participated in my state's math competition, but never made it to the finals. Yet it didn't discourage me from attending college and majoring in math.

I also participated in the music competition, called "solo and ensemble festival." Like the math competitions, music competitions are an artificial environment -- one student in front of a judge, rarely any audience. But in some sense they are "real world" because they mimic the auditions that are very much a real part of a music career, e.g., for getting music scholarships and entry into most orchestras. I never got that far.


Qualitatively, it's the same problem that a lot of people have in grad school. Doing homework from a textbook isn't the same as the uncertainty of research.


Success at mathematics requires staring at it until you can understand it, however long that takes


Well, no -- it's not enough to just stare at the problem / thing.

What you have to do, effectively, is become at one with its true nature. Which in general is much more difficult than simply staring at it.


> become at one with its true nature

That's very vague.


It is. Which is why it's so difficult.


pardon the shitpost, but that is a compilation of some of the most profound and useful quotes i've ever seen. the difference between superficial achievement and real contribution is profound, and most of our systems are designed to reward and reinforce the former at the expense of the latter.

"They’ve done all things, often beautiful things in a context that was already set out before them, which they had no inclination to disturb. Without being aware of it, they’ve remained prisoners of those invisible and despotic circles which delimit the universe of a certain milieu in a given era."


Yes. This quotation from Goro Shimura sums it up, IMHO:

> "Though such a competition may have its raison d'être, I think those younger people who are seriously interested in mathematics will lose nothing by ignoring it."

And that's what it comes down to. Are these competitions fundamentally necessary to someone's development as a mathematician? Shimura seems to be saying something like, "Eh. Not really. If you don't like them, you don't need them."


I've never done these competitions but I think that a good argument for participating in them is that you might meet like-minded people.


On the other hand, if you don't like the competitions, you probably would not meet like-minded people going there (assuming other participants like them).

Maybe we need alternative avenues for maths nerds to meet each other.


You certainly would. You're still talking about some very nerdy people.


But it's very hard to design a feedback loop with a short enough cycle to properly reward the latter.

It makes me think of the line about raising children, that it's better to say, "I recognize that you worked really hard on that, it looks wonderful!" than "Good job! You're so smart!" [1]

because one captures the reason why they did a good job. By calling that out, you can perhaps reinforce behavior with a longer view.

[1]: http://www.theatlantic.com/education/archive/2015/06/the-s-w...


Yeah, my parents and all my teachers told me I was 'gifted' from 2nd grade, and I really feel like it held me back, overall. Yes, I picked stuff up quickly, but I was also lazy as hell, and given no real incentive to learn how to not be lazy until it was too late to turn around my disastrous grades.

I have a son now, and I'm going to try to avoid calling him smart or gifted. Or at least not telling him that he's smarter than the other kids.


I had a similar experience: that kind of reinforcement encouraged me to try to do the least work possible to get the same results as others in order to prove how gifted I was, instead of working hard to go above and beyond. It was a rude awakening when I got to the point in life when I wasn't competing against my peers to complete set tasks, but rather competing against them to provide the most valuable contributions to a research group or company.

However, I also believe that this kind of explanation can be harmful because it puts the blame for my laziness on others. Even though the research supports the idea that this effect occurred, ultimately I got past it by focusing on my own agency.


The stronger lesson I took away from the article on reinforcement is that praising hard work makes the child less afraid of failure and more apt to persevere. Otherwise, failing a task is a scary symptom of not being smart enough.


i have two young kids (3 & 2). when they do something "clever", it is always the thing they've done that's clever, not them.


> most of our systems are designed to reward and reinforce the superficial achievement at the expense of the real contribution.

No one cares about your competition results after you start publishing papers, so not really. So if you publish shit papers it doesn't matter how well you did on competitions, you will never get a good job in academia (or anywhere else for that matter unless you learn a useful skill like programming).


yep. the "..no inclination to disturb" part reminds me of this NPR piece and how admiral mike mullen hired the lead doctor ( http://www.npr.org/sections/health-shots/2016/06/10/48156831... ):

[snip snip]

It was 2008 and Army surgeon Christian Macedonia had been told there was a high-level opening for a doctor who wanted to change the military's approach to battlefield brain injuries. When Macedonia arrived for the interview, he found himself face to face with Adm. Michael Mullen, chairman of the Joint Chiefs of Staff.

"And he looks at me and he goes, 'Who are you and what are you doing in my office?' " Macedonia says.

Macedonia explained he was there about the job. Mullen replied that he had decided he didn't need a doctor on his staff. "And I said, 'Sir, I'm going to disagree with you,' " Macedonia recalls.

Macedonia, a lieutenant colonel, told the admiral that if he really wanted to do something about brain injuries, he did need a doctor. What's more, he needed one with combat experience, strong scientific credentials and a high-level security clearance. "I said, 'Sir, you really only have one person and that's me.' "

Mullen smiled. He had been looking for someone he might have to rein in, but would never have to push. "And Macedonia fit that model for me perfectly," he says. "He's very outspoken, very straightforward. We talk about out-of-the-box thinkers; he just lives outside the box."


To call it superficial achievement is too strong: there's a whole lot of hard, vital work in Kuhnian 'normal science' https://en.wikipedia.org/wiki/Normal_science .


I find the articles on LW to be generally of high quality (setting aside the HN bias against Eliezer Yudkowsky).


> HN bias against Eliezer Yudkowsky

Excuse me?


Eliezer is one of LW's more controversial writers. After the Roko's Basilisk fiasco erupted in LW, it's been pretty hard to bring his name up in HN without people posting scathing critiques and attacks.



