Hacker News

I'm not so sure I agree with you, because there is something to be said against your counterexample. In Java, I can reason as follows: the Java Dependency Injection Framework is gigantic and convoluted, and therefore it is going to be complex and difficult to manage.

In Java there is a sort of ceiling, one might call it an upper bound, on the mindfuckery per square inch of code. In 100 dense lines of Lisp, there is no a priori reason to assume that no one has written a recursive descent macro-expanding s-expression handler for arbitrary routing of network packets in a rules-based DSL. In the same amount of dense Java code there are rarely more than 200 method calls, all statically checked, and rarely more than 100 variables, typed and scoped.

From John Carmack's recent article on static checking: "If you have a large enough codebase, any class of error that is syntactically legal probably exists there." Now, he is concerned with actual defects, but the same logic applies even more strongly to stylistic concerns. As the size of a codebase increases, the probability that any particular language feature is absent approaches zero. And in a language like Lisp, where you can write your own language features, this is moderately horrifying.

The fact that you can replicate Java's dependency injection in a few orders of magnitude less code in Lisp is not a comfort to me, because the 10,000 lines of code in Java's Dependency Injection Framework is a red flag to me. The chances that someone who writes the same thing in Lisp has drastically simplified the implementation are not so high.




the 10,000 lines of code in Java's Dependency Injection Framework is a red flag to me.

We agree.

The chances that someone who writes the same thing in Lisp has drastically simplified the implementation are not so high.

We disagree.

First, when I roll my own, I scratch only my own itch. I don’t need to build something that works for everyone, everywhere. It’s like Microsoft Word: MSFT boasts that most users only need 5% of what it does, but every niche of users uses a different 5% of the whole thing.

But can I roll my own? Well, I suggest that the answer is more likely to be "yes" in Lisp than in Java. Folklore suggests that defect rates are roughly constant per line of code, so if I need fewer lines of Lisp than of Java, I should have fewer defects to contend with. I assume three things. First, I need much less than the full framework's functionality. Second, Lisp is more expressive than Java, so any given piece of functionality needs fewer lines of Lisp than of Java. Third, I suggest that languages with meta-programming support are particularly well suited to tasks like dependency injection, reducing the amount of code I need to write even further.
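To make the scale concrete, here is a rough sketch in Python of what a project-specific injector can look like when it only has to scratch one itch. The container, class names, and the name-based wiring convention are all illustrative assumptions, not any real framework's API:

```python
import inspect


class Injector:
    """Toy dependency injector: registers providers by name and
    builds objects by matching constructor parameter names."""

    def __init__(self):
        self._providers = {}

    def register(self, name, provider):
        # provider is any zero-argument callable producing the dependency
        self._providers[name] = provider

    def build(self, cls):
        # Inspect the constructor and resolve each parameter by name.
        params = inspect.signature(cls.__init__).parameters
        kwargs = {
            name: self._providers[name]()
            for name in params
            if name != "self" and name in self._providers
        }
        return cls(**kwargs)


# Example wiring for one project's specific needs.
class Database:
    def query(self):
        return "rows"


class Service:
    def __init__(self, database):
        self.database = database


injector = Injector()
injector.register("database", Database)
service = injector.build(Service)
print(service.database.query())  # -> rows
```

A real framework handles scopes, lifecycles, circular references, and configuration for everyone's 5%; a sketch like this handles only the 5% one project actually uses.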

Now, the big DI framework is written by someone else. So is it free? No. I need time to learn it, time to use it, I can make mistakes using it, I can get an XML configuration wrong, I can implement an interface when I am supposed to extend an abstract class, I am not immune from defects just because I am using a library.

So the net question for me is whether the chances of successfully rolling my own feature in Lisp for my project's specific needs are greater than the chances of successfully using an existing framework in the Java world. And, parallel to that, the question is whether someone else working with my code will find it easier to decipher the XML configuration, interfaces, and classes I have written to work with a Java DI framework, or easier to work with the smaller, simpler, and more compact Lisp code written for this specific project.

Reasonable people can go either way on this, but I find it hard to believe that Java is "obviously" a win, especially for anyone who has used one of these big frameworks with their many gotchas (as I have).

p.s. Of course, the wild card is that there are plenty of libraries in Lisp as well. I reject the notion that every Lisp programmer reinvents everything from scratch: https://github.com/lprefontaine/Boing


Keep in mind that a less verbose expression is not necessarily


Sorry, I didn't see that this actually posted. What I intended to say is that a less verbose expression is not necessarily easier to understand. The article is really about the balance between elegance (or performance) and accessibility. An expert coder working with a powerful language like Lisp can implement a lot of functionality very quickly, but there is a point where less skilled programmers sharing a common understanding of a less flexible language can implement more functionality more quickly, simply because there are more of them working in parallel.


This is the kind of thing we say about programming all the time without evidence. We don't know this, or anything like it.


Strictly speaking yes, it's a hypothesis. But the fact that the programming ecosystem looks as it does constitutes some evidence in favor of that hypothesis. Were it otherwise, you'd expect the professional programming world to be economically dominated by Lisp and a handful of super-programmers. Yet that isn't what we see. Why not?


That's the stock objection. Here's my answer: historically speaking, we've barely started. Software is the first mass endeavor of its kind that humans have tried. It belongs to a post-industrial era that can be expected to take a long time to work itself out. Under such conditions, social proof doesn't work. Whatever the rational way of making software turns out to be, statistically speaking it hasn't been tried yet.

Will it turn out to be "Lisp and a handful of super-programmers"? I don't know. What we need is an age of experimentation. The great thing is that startup costs are now so low that we are beginning to see that happen. Emphasis on beginning.


That argument seems a little too convenient; we are after all talking about a field (and a language, Lisp) that's been around for over 50 years. I could certainly see pockets of inefficiency persisting after such a time, but I would hardly expect the exception to be the rule at this point.

Keep in mind that I'm only suggesting that a crossover point exists, I don't pretend to know where exactly it is. In order for me to be wrong, a single superior programmer would always have to be better than two slightly inferior programmers working with a slightly less expressive language. I strongly doubt that this is true. The simplest explanation for what we observe is that in fact a team of inferior programmers working in parallel can be more efficient than a single superior programmer working alone. Not always, but often enough to prevent more expressive but less comprehensible languages from becoming dominant. What constitutes "expressive" and "comprehensible" will evolve over time, as you suggest (maybe Lisp will someday become tomorrow's Java!), but the underlying scaling law will remain.


This is a fascinating conversation. I've always had trouble working in teams, so I'd like to believe that superior programmers will out in the end. Or at least that they will in a few problem domains.

But I wonder if this is wishful thinking, if this isn't just another case of the prisoner's dilemma. Perhaps like how cities with mostly poor people would collaborate many times in history to conquer neighboring barbarians, even though the barbarians had more freedom and were thus richer. (See http://en.wikipedia.org/wiki/Fates_of_Nations.)

Then again, there's reason for hope. Perhaps the parallelizable sort of programming is more menial. It certainly seems that way with the way communication costs overtake large teams. It's almost like Vernor Vinge's zones of thought (http://en.wikipedia.org/wiki/A_Fire_Upon_the_Deep, http://www.youtube.com/watch?v=xcPcpF2M27c) - as your team grows bigger you can just watch the members grow dumber in front of your eyes as more and more of their cognitive effort is eaten up by internal communication, leaving less and less for externally-useful work. If this is true, there's hope that advances in programming will automate the low-cognition tasks and allow programmers to focus on the high-cognition ones, leveling the playing field for small, high-cohesion teams.

---

Me, I've been obsessed with something raganwald said when he spawned this tendril of conversation: exercising explicit control over the space of inputs my program cares about. My current hypothesis: eliminate fixed interfaces, version numbers, and notions of backwards compatibility. All these are like petri dishes of sugar syrup for code to breed more code. Replace them with unit tests. Lots of them[1]. If I rely on some code you wrote, and I want to pull in some of your recent changes, I need to rerun my tests to ensure you didn't change an interface. Programming this way is less reassuring, but I think it empowers programmers where abstraction boundaries impose mental blocks. Great programmers take charge of their entire stack, so let's do more of that. I'm hoping this is the way to prove small teams can outdo large ones.

[1] Including tests for performance, throughput, availability. This is the hard part. But I spent a lot of time building microprocessor simulators in grad school. I think it's doable.
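A minimal Python sketch of the "tests instead of interfaces" idea above. The upstream function and the behaviors pinned here are hypothetical stand-ins for a dependency's code, not a real library:

```python
def upstream_parse(text):
    # Stand-in for someone else's code that my project depends on.
    return [int(x) for x in text.split(",")]


def test_upstream_contract():
    # Pin only the behaviors *my* code actually relies on, instead of
    # trusting a version number or a frozen interface declaration.
    assert upstream_parse("1,2,3") == [1, 2, 3]
    assert upstream_parse("7") == [7]
    # If the upstream author changes a name, a type, or the shape of
    # the result, this fails the moment I pull their changes and rerun.


test_upstream_contract()
print("contract holds")
```

The contract lives in the consumer's test suite rather than in a shared interface, so each consumer pins exactly the slice of behavior it cares about.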


I'd like to believe that superior [solo] programmers will out in the end

I think you're wrong (sorry!) because it's impossible to talk about superior programmers without talking about teams. Building complex systems is a team sport. There's no way around this. But you can't have good teams without good programmers.

The phrase "scaling complexity" has at least two axes built into it: the abstraction axis -- how to get better at telling the program to the computer -- and the collaboration axis -- how to get better at telling the program to each other. Most of this thread has been about whether we suck at the former. But I say we really suck at the latter, and the reason is that we haven't fully assimilated what software is yet. Software doesn't live in the code, it lives in the minds of the people who make it. The code is just a (lossy) written representation.

We can argue about how much more productive the best individual working solo with the best tool can be- but there's no way that that model will scale arbitrarily, no matter how good the individual/tool pairing. At some point the single machine (the solo genius) hits a wall and you have to go to distributed systems (teams). One thing we know from programming is that when you shift to distributed systems, everything changes. I think that's true on the human level as well. (Just to be redundant, by "distribution" here I don't mean distributed teams, I mean knowledge of the program being distributed across multiple brains.)

Maybe you wouldn't have trouble working in teams if we'd actually figured out how to make great teams. So far, it's entirely hit and miss. But I think anyone who's had the good fortune to experience the spontaneous occurrence of a great team knows what a beautiful and powerful thing it is. Most of us who've had that experience go through the rest of our careers craving it again. Indeed, it has converted many a solo type into an ardent collaborator. Like me.

I was originally going to write about this and then decided not to go there, but you forced my hand. :) Just as long as it's clear that when I say "team" I mean nothing like how software organizations are formally built nowadays. It's not about being in an org chart. It's about being in a band.


The phrase "scaling complexity" has at least two axes built into it: the abstraction axis -- how to get better at telling the program to the computer -- and the collaboration axis -- how to get better at telling the program to each other. Most of this thread has been about whether we suck at the former. But I say we really suck at the latter, and the reason is that we haven't fully assimilated what software is yet. Software doesn't live in the code, it lives in the minds of the people who make it. The code is just a (lossy) written representation.

Ah, you're right. I was conflating the two axes.

I'd like to be part of a 'band'. I've had few opportunities, but I've caught the occasional glimpse of how good things can be.

Since that whole aspect is outside my ken, I focus on expression. Hopefully there's no destructive interference. I would argue that what you call abstraction is about 'communication with each other' more than anything (even though it breaks the awesome symmetry of your paragraph above :)


No, you're right. They're not axes.


To me the big question is how we're going to scale up complexity. The million-line programs we have today are already an absurdity. What are we going to do, have billion-line programs? Anyone who can figure out how to provide equal value with 100x less code (edit: that grows, say, logarithmically in size rather than superlinearly) is going to have an edge. Brute force won't work forever. Plus it gets extremely expensive as one approaches the limits.


Which brings us back full circle to the Mythical Man Month. While I've argued here for a crossover point where more programmers = more productivity, I acknowledge that there is a similar crossover point going the other way, where more programmers = less productivity. Finding the sweet spot in-between is the art of organization, and not yet a science.


Have you read The Mythical Man-Month? More less-skilled programmers do not make things go more quickly. That idea is so wrong on so many levels that I don't know where to begin.


MMM said that adding more programmers to an already-late project doesn't make it finish faster. It didn't say that having more programmers who already understand the project will make it finish later. If that were true, then one programmer would always be the optimal number to complete any project, which is clearly wrong.

Also, "less skilled programmers" are not always "incompetent programmers," so more of the former who already understand the system may be able to complete the project faster than fewer "more skilled programmers."

Have you really read The Mythical Man-Month?


Yes I have. Do you remember why he said more programmers is a problem? Because it increases the amount of communication. Because of this, a few highly skilled programmers get more done than many less skilled ones, largely because less communication is required.

I've witnessed this directly many times over the years.


MMM does indeed say that too many programmers make a project late, regardless of how far behind that project already is. Nine women can't have a baby in a month, and all that. Have you really read it? The book makes this quite clear; the Wikipedia page doesn't.


Perhaps I'm missing something here. Argument 1 seems to basically be that Java code is bigger than Lisp code. Argument 2 is that big code bases tend to use all of a language's features.

Why couldn't you solve the second problem with the first? As you've already pointed out, Lisp code is smaller. Thus, you don't have to worry about the "large code base" problem like you would with Java.


A couple of lines of particularly dense Perl can make life pretty terrible compared to many thousands of lines of getter-setter Java. You can pack more language features into a smaller area with a more expressive language, so a smaller area can hold more complexity, more opportunities for error, and therefore more bugs.

At the very least, we can say "the LOC-bug relationship doesn't necessarily hold across languages". It's a useful rule-of-thumb, not a universal law.
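An illustration of that caveat, in Python rather than Perl (the functions and data are made up): both versions below do the same work, but the dense one packs filtering, transformation, and aggregation into a single line, so a per-line defect estimate treats the two very differently.

```python
# Dense: one line carrying filtering, transformation, and aggregation.
def total_dense(orders):
    return sum(o["qty"] * o["price"] for o in orders if o.get("paid"))


# Verbose: one idea per line, getter-setter-Java style.
def total_verbose(orders):
    total = 0.0
    for order in orders:
        if order.get("paid"):
            subtotal = order["qty"] * order["price"]
            total = total + subtotal
    return total


orders = [
    {"qty": 2, "price": 5.0, "paid": True},
    {"qty": 1, "price": 3.0, "paid": False},
]
print(total_dense(orders), total_verbose(orders))  # -> 10.0 10.0
```

Counting lines says the second version is five times "bigger," yet it expresses exactly the same amount of opportunity for error, which is the sense in which the LOC-bug rule of thumb breaks down across styles and languages.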


That is only the case with actual complexity required by the problem, not incidental complexity caused by the limitations of the language (such as is common with Java).


While I don't agree with everything you say, I'm definitely reusing the term "mindfuckery per square inch of code".


This is related to the One True Code Quality Metric, WTFs per minute of code review.



