Jane Street has been doing this since they were much smaller. I interned there when they had like 300 people, and they were actively cultivating a great brand as an employer back then too—and looks like they've really made it work over the last decade!
My impression with them in general was that they were willing to do lots of things that did not "conventionally" make sense at their size, and those things paid off. The internship program, for example, was relatively large in comparison to the size of the company at the time (≈50 interns?) and had lots of structure (events/talks/classes/group projects/etc) like you would expect from a big tech company, not from a 300-person firm.
They were also willing to build a lot of tools in-house like their own build system[1], their own code reviews system[2], etc. Most people would see this as wasteful NIH but I'm convinced it was a net benefit for them—they managed to be so productive in an absolute sense and especially on a per-engineer basis because they were willing to build so much themselves, not despite it. I'm sure the same thing applied to their recruiting efforts.
The biggest thing I took away from my internship was how much conventional wisdom in the software world was not necessary or true.
> Most people would see this as wasteful NIH but I'm convinced it was a net benefit for them—they managed to be so productive in an absolute sense and especially on a per-engineer basis because they were willing to build so much themselves, not despite it.
It paid off for them, but I'd say it's a risky move in general - it is very easy to get sidetracked by failed projects that have nothing to do with the business. There are plenty of examples - Uber's internal chat system, etc.
Other tools were more of a necessity; e.g., the OCaml build story was not great back then. Investing in OCaml itself was a calculated risk.
Also I wonder how things would be different if they were starting today, where there are more mature tools in the space.
I often dread seeing an employer use in-house systems for common tasks like project tracking and code review. I'm curious how JS's internal tools stack up against systems out there, like GitHub or Jira.
I prefer the JS thing to GitHub, mostly because it makes it a lot easier to juggle lots of in-flight changes that depend on each other while simultaneously supporting code review.
> how much conventional wisdom in the software world was not necessary or true.
Do you remember any specifics?
I’m also curious how their tooling made them so much more productive. Were the tools just really well designed, or did they integrate perfectly with each other?
I used to work there - the Jane Street code review software is awesome, kind of like Graphite but it works reliably. You can write a big tree of PRs; PRs get reviewed separately, and you can merge them in any order at your leisure, without worrying too much about rebase issues or clobbering review state. I would love an open source tool that works nearly as well. It may exist, but I haven't seen it yet.
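For readers who haven't used a stacked-changes workflow: the bookkeeping such a tool automates can be sketched in plain git. This is a hypothetical toy example (branch and file names are made up, not Jane Street's actual tooling) showing the restack you'd otherwise do by hand every time a reviewed commit lower in the stack gets amended:

```shell
set -e
# Throwaway repo with two dependent changes, each on its own branch.
dir=$(mktemp -d); cd "$dir"
git init -q repo && cd repo
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m "root"

git checkout -qb base
echo a > a.txt && git add a.txt && git commit -qm "change 1"

git checkout -qb child base
echo b > b.txt && git add b.txt && git commit -qm "change 2"

# Review feedback arrives on the first change, so we amend it in place...
git checkout -q base
echo a-fixed > a.txt && git add a.txt && git commit -q --amend -m "change 1 (amended)"

# ...which strands the child branch on the old commit. Restack it by
# replaying just "change 2" onto the rewritten base.
git rebase -q --onto base child~1 child

git log --format=%s   # child now has change 2 on top of the amended change 1
```

With two branches this is tolerable; with a deep tree of dependent changes it gets painful fast, which is where the tools mentioned in this thread (Graphite, Revup, etc.) earn their keep by automating the restacking and keeping review state attached across rebases.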
And yeah, Jane Street is a pretty compelling demo that A) NIH syndrome is fine if you're good at writing software, and B) it doesn't really matter that much whether you use a mature language or some uncommon, immature one.
Revup does a good job of integrating tree-of-PRs workflows into GitHub, and is also designed so that one developer can use it in a way mostly transparent to reviewers or their colleagues. I _think_ that Revup + Reviewable.io would match much of the capabilities listed in the linked talk.
Because it is. What is the point of reinventing these wheels when gazillions of man-hours have already been invested in open source tools that can do it better and cheaper?
1. You can do it better, with better taste. Existing tools are... not uniformly well-designed.
2. Building something for yourself is qualitatively different than building something for somebody else. (I've heard this described as "situated software"[1].) Both the results and the process are different.
3. Building something yourself lets you become an expert in the domain and the tool you're building, often faster and deeper than using somebody else's system. It's a way to build up tacit knowledge and institutional capital as much as (or even more than) software.
4. More often than people realize, building something yourself ends up simply faster than first learning and then wrestling an existing tool into the exact shape you need. I've seen a lot of teams waste way more time trying to get some existing thing working than they would have spent building their own thing.
Obviously it isn't always true that building your own version of something makes sense, and nobody is going to be building everything from scratch... but it makes sense far more often than conventional wisdom dictates.
Moreover, one thing the NIH-syndrome handwavers miss is that developing a custom, situation-specific solution lets you expand your knowledge of the discipline, build a more tailored solution, and avoid the supply chain vulnerabilities that come with third-party packages. In my experience a handbuilt solution is more efficient than the OSS out there in most cases, except where the scale of the task can't be achieved in a small amount of code (e.g. a versatile graphing library) - and those cases are /very/ few and far between.
You can, or you can just work around the existing tooling, which is quicker and, assuming you choose wisely, has a wealth of googleable documentation. It's humbling, because you need to learn through _using_ rather than creating. It feels less productive, even though you're 90% of the way there with an existing tool.
> 3. Building something yourself lets you become an expert in the domain and the tool you're building,
It might, or it might just drown you in the complexities of the domain you're building the tool for. Plus, the assumptions you make when first designing the tool tend to be disproven as you learn more. It's good to question tools, but not to blindly rewrite them without studying _why_ they do that thing in that weird way.
> 4. More often than people realize, building something yourself ends up simply faster than first learning
Which means you are very likely to make the same elementary mistakes as the previous generation of tools. Plus it's _always_ slower to start, mainly because naive rewrites have naive bugs. You just don't know it yet.
At the legions of VFX companies I've worked at, the number of people who look at the asset management system and go "Oh, that's not hard, let's just rewrite that to be x" is too damn high. Six engineers and a year later, they still have a broken system - just broken in new and interesting ways.
Am I saying that you _shouldn't_ build custom tooling/software? No. I'm saying that you should save that effort for something that's critical to the company.
All projects have a limited number of innovation tokens. The more you spend, the slower your project will go. You should really only look to spend two innovation tokens max.
I'd say the key insight I see in #4 is that people do not consider libraries to be code; they treat them as black boxes. It's the same urge that causes teams to label a code base they've taken over "legacy" and "in need of a rewrite."
Sometimes the answer is to dive in and seek to understand it. That may end up meaning you fork the tool or contribute back to it; it may mean you choose to build anyway; or it may mean you come back understanding it better. However, if a library is not working for you, the slowest answer is often the one where you fiddle with it until it's working without ever taking the time to dive in.
By the numbers, I think only the best programmers - a small minority - should consider NIH. Everyone else should use off-the-shelf as much as possible.
Since the best software is written by the best programmers, those of us who are interested in the best software, and have the capability to make such software, should be free to do NIH on an as-needed basis.
I think it depends on whether the targeted application is essential to your company's core business/product.
As an example, if you're a small cloud vendor, you should probably write your own machine OS imaging automation, but probably not your own chat client.
I bring an inordinate number of third-party components in-house for a solo dev company, so these tools/libraries only have one user. I do take return on investment into account and only bring things in-house if I expect a positive net present value.
A few factors not normally taken into consideration:
- Tooling is a great place to practice transferable skills on a practical problem. This always annoyed me about XKCD 1205, which didn't take into account the improvement in optimization ability you get from practicing optimization.
- I’ve had a few external library dependency rug pulls, both open source and proprietary. It never happens at a convenient time and is a total pain in the ass. Contractual agreements won’t save you unless you’re ready to sue for breach of contract, and even that is no real solution. Perhaps getting code in escrow could be an alternative, but I’ve never seen that work out either.
- External quality is mixed and getting worse. I’m careful with my own code, so most of the bugs I hit are in other people’s code. If it’s a small library, it can easily be less work for me to bring the stuff in-house than to debug someone else's crap.
Honestly, I wish I didn’t have to bring so much in-house; it would have saved me a huge amount of work. But we don’t have an efficient market with clearly defined standards of goods where we could treat software as a commodity, and I’m not sure we’ll ever get that.
What do you mean, "reinventing these wheels"? Are you sure there are other tools that look and feel the same?
Letting your development team control and build their own tools is a good idea, if they are able to. You probably don't want to do it in Java or C#; it'll take too much time, too many people, and be too unreliable.
This implies that open source tools that do more (i.e. serve more use cases, work with larger amounts of software) work better for the narrowly defined use of a single company, even one as large as Jane Street.
[1]: First Jenga, now Dune
[2]: Here's a neat talk on how they do code review at Jane Street: https://www.janestreet.com/tech-talks/janestreet-code-review...