Controversial Programming Opinions (2012) (blogoverflow.com)
145 points by wagslane on Dec 4, 2020 | 328 comments



> Programmers who don’t code in their spare time for fun will never become as good as those that do.

I think this opinion is much misunderstood.

It should read: people who are not interested in what they are doing are unlikely to ever become very good at it.

People who are interested in a topic will have their brains work overtime at different times of day, will be picking up information even if they are not specifically looking for it, will be trying to understand stuff they are not specifically required to, and so on.

As pointed out by some people (like David Allen), our brains have something called the Reticular Activating System which, if programmed, brings interesting things to our conscious attention when they happen. And then we also have an easier time keeping focus.

People who like coding will probably not see it as just work; they will also choose to do it in their own time, and that usually results in private projects.

I do electronics at home and this usually includes some kind of microcontroller and coding, just because I like it. But I don't have public repositories and I don't do open source projects because I do my own fun projects at work and get directly rewarded for it.


This is also deeply misunderstood as an interview question. The idea that interviewers will reject candidates for not programming outside of work is a persistent meme now.

The truth is that (good) interviewers don't care when/where/how you acquired your skills and knowledge. They just care that you have the knowledge and ability, and also that your learning is on a consistent upward trajectory.

We ask if people do side projects not because side projects are mandatory, but because the goal of an interview is to surface any and every reason to believe the candidate is qualified. In practice, most engineers don't put side projects or hobby projects on their resume or even their GitHub profiles, and they won't spontaneously bring it up during the interview. Asking if they've worked on any fun hobby projects usually strikes up good conversation, loosens up the candidate, and maybe unearths some skills and interests that don't show up on the resume.

If a candidate says they don't program outside of work, it's not a negative. It's neutral. We just move on.


As an interviewer I can only agree.

I look for signs that the person is interested in what they are doing. Private projects with public repositories are one sign, but nowadays people have started showing up who force themselves to do these projects just to improve their interview prospects.

I even had one person admit it when it was obvious they couldn't tell me anything about their own project.

Obviously, this is a result of a well-known effect (Goodhart's law) where judging performance by a measurement causes people to game that measurement.

Another sign that people overlook and misunderstand is deep knowledge of any topic related to software development. If somebody has acquired a good, deep understanding of at least one topic, it shows they were likely, at least at some point, interested in it.

People are sometimes put off that I seem to be asking a large number of hard questions and grilling them to death, but that's what I am doing: looking for signs of being interested in something.

I even have one question where I ask: What is the question you would like me to ask you? That's just another occasion to have a discussion on something where the candidate can demonstrate their deep knowledge.


> but nowadays people have started showing up who force themselves to do these projects just to improve their interview prospects.

As an interviewer, it's fascinating to read forums like /r/cscareerquestions where people are working hard to game the interview system. Coding bootcamps have also gotten into the bad habit of training students to pass interviews by practicing coding questions and putting cookie-cutter "side projects" in their GitHub profiles.

One of our local coding bootcamps had students post their class projects to GitHub as "side projects" to show to interviewers. The catch was that every graduate from the boot camp had the exact same "side projects" that followed the same formula.

As an interviewer, it's amazing to see people arrive at interviews with an idea that we're just robots trying to check items off of a checklist. The internet pop-culture view of coding interviews is that they're just arbitrary gatekeeping that can be bypassed if you simply know which magic words to say to your robot interviewers.

Tip for interviewees: Speak to your interviewers as if they were your future peers or managers.


That mindset pisses me off so much. Every time someone asks me for advice on an interview and I don't mention doing hours of leetcode studying, they accuse me of lying and trying to sabotage them, because _obviously_ the only way to pass an interview is to memorize every possible question the interviewer can ask! It's so crazy to me how people think this, and then I'll sit down with them for a mock interview and watch them completely stop communicating, or treat me like I'm a binary switch that only comes on when they have the optimal solution.


Nothing to be pissed about. You are correct, and so are the interviewees.

Interviewers and interviewees have different incentives.

My job as an interviewer is to see through bullshit as efficiently as possible and decide on some realistic compromise when selecting candidates.

I also sometimes look for a job myself. Technically, I could go with the flow and do whatever other candidates are doing. But I don't want to and I don't need to. I have over two decades of both development and interviewing experience, and I know that just being myself and not trying to impress anybody is already going to impress interviewers a lot.

I gladly offer information about which parts of the requirements I am lacking, and I will immediately say that I don't know something when I don't -- without trying to convince the interviewer that I know more than I really do. This alone lets me stand out from other candidates.

There is one more reason not to try to game the process, and it is the most important one. I want to be building trust with my manager from day one. Misleading them about my knowledge and experience is the best way to immediately taint that cooperation.


Yeah, I've noticed a big gap between interviewees' perception of what interviews are supposed to be like and what I look for when conducting an interview. Oftentimes, I see people on reddit asking questions along the lines of "I've an interview coming up, what leetcode thing should I cram", but that really misses the point that a good interview is nothing like a teacher marking a scantron school test.

The crazy part is that some of these interviewees even have work experience and ought to know better!


>Yeah, I've noticed a big gap between interviewees' perception of what interviews are supposed to be like and what I look for when conducting an interview.

This depends on the person and company doing the interview, I think. I work as a consultant implementing a specific software package and recently interviewed with a Unicorn, trying to switch to more general-purpose coding. The difference in the process is night and day. When I do interviews with consulting firms it's almost always with someone with significantly more experience than me, and they are usually pretty free-flowing and in-depth discussions.

The Unicorn interview was 100% checking boxes. I had 4 leetcode style coding challenges and the only thing that really mattered was solving the problem. The discussion ones were pretty obviously interviewers asking questions from a piece of paper. The only goal there was to give them something good enough to write in the box so that it would pass later review.


I agree that big companies have interviews that look more checkboxy since they generally want interviews to be standardized/fair for candidates, but even then, a lot of the interview is not about the solution to the problem per se. For example, if the candidate struggles with basic syntax, or if they struggle with refactoring, or they don't react to feedback well, these are all things that get considered, even if they do solve the question optimally on paper.

I had one where we solved the problem half way through the session and went on a big tangent about how to engineer a solution to a related fuzzy problem.

I do agree though that interview standardization can lead to mediocre interviewer practices if the interviewer isn't truly committed to improving their recruiting skills (which is sometimes the case if a team is on a crunch, for example).


I agree. All of my consulting interviews have gone on tangents here, there and everywhere, and focused more on the ability to think, problem-solve and be a human, whereas interviews for engineering and software jobs are rote.


"Tip for interviewers: Speak to your interviewers as if they were your future peers or managers. "

That exposes a real problem. With people gaming the interview process, that is good advice.

But surely it is part of the process that leads to monocultures? Workplaces full of 25-35 year old white men? (Or whatever.)

How can you get a good, diverse group? Does diversity matter? I think it does; am I right?


I don't think being more professional/collegial and less dishonest in an interview leads to monocultures. Ideally it would just reflect well on the person for not obviously trying to game the system. If that actually leads to someone being less likely to get a job someplace, well that's a place you wouldn't want to work at anyway.


You don't need every job you apply for. You only need one.

When you get that one job, you want a good relationship with your manager. Trying to game the system means your manager will have higher expectations that will not be met, possibly causing disappointment or trust issues.

By trying to game the system you have almost 100% chance of damaging your future relationship. You need to ask yourself whether that's something you want.


Things become memes (they used to be called stereotypes) because there's enough truth behind them to make them relevant and "sticky".

The programming in your free time comes up all the time because people, like the OP, see it as a signal. If you do it, you're good. If you don't, you're bad.

And specific to your point - "do you code in your free time" as an interview question - if it didn't provide you with signal that you thought was useful in determining the quality of a candidate, you wouldn't ask it. So, why are you (interviewers in general) asking it?


Because I want to get to know the person a little bit, professionally. Because I want to learn if the person has some traits that correlate well with doing a good job later.

I don't think it is going to be very controversial when I say that people who are enthusiastic about programming, have experience completing projects, can solve difficult problems, can show they have knowledge of concepts, are intelligent and know how to work with other people tend to do just that -- perform better at their jobs.

Also, I don't ask "do you code in your free time". I ask for any projects private or professional they have been working on recently that they think are fun and interesting.


>If you want to build a ship, don’t drum up the men to gather wood, divide the work and give orders. Instead, teach them to yearn for the vast and endless sea. --Antoine de Saint-Exupéry

I've noticed this in my software work. When I'm on a project that I'm personally excited by, I feel like I'm firing on all cylinders and am vastly more productive. Driven by COVID fear, I switched jobs over the summer to a company that would allow me to work from home. I'm not at all interested in the project I'm working on. And I feel like it shows...I don't retain things that coworkers have already explained, I don't feel a drive to learn more than necessary about the domain, etc. Maybe some of it is just COVID-brain, but my goodness being interested in your work makes a world of difference.


That's a good take on that opinion. I have kids and a wife who has a job; my spare time is taken up by them and by the time I need to rest. I would code in my spare time if I had it, but I'm interested in each and every thing I do when working.


As I get older, I start realizing that I don't want to spend my free time coding all the time. I do like to take that time to build something with my hands, fix things around the home. I feel those are good critical thinking tasks and great experiences that can be applied to programming.


#1: "Programmers who don’t code in their spare time for fun will never become as good as those that do." #17: "Software development is just a job."


I don't think there is any problem here.

Not everybody has to be uber excited about everything they do. I surely can empathize with every person who wants to have a well-paying job but just doesn't feel anything towards anything that is well paying.

Just set your expectations realistically. If you are of slightly above average intelligence and you are not uber excited about programming (i.e. a typical developer), chances are you are never going to be a rock star developer. Plan your career realistically, taking that into account. Maybe work in an area that doesn't require changing the entire stack every 2 years?

My friend consciously decided (we had a discussion) that he is not ambitious but still wants to earn. He is now a COBOL developer working with people who are on average twice his age. He has been and will be well paid for many years. He works an 8-hour day and has time for his kids and other interests. He says he couldn't be happier.


Programming is not the same thing as software development. Software development requires programming, among many other things. Programming need not be in the service of software development.

More specifically, I wouldn't call solving Project Euler problems "software development". It's a game, a puzzle, that requires programming. And I wouldn't call requirements analysis, UI design, testing, etc., "programming", but they are fundamental aspects of software development.

It's possible to treat programming as a calling and software development an occupation.


Pff, these are nothing. Buckle up!

* 1-based indexing is superior to 0-based indexing.

* If you use spaces to indent code, or in any way rely on a fixed width font to "line things up", you have a fundamental misunderstanding.

* Relatedly, the 80 column "rule" is as stupid in code as it would be in prose. Use your aesthetic sense.

* Vim and Emacs both have terrible user interfaces, and are an inferior experience to even a normal text editor let alone a decent IDE. Users have Stockholm Syndrome.

* For beginners, BASIC is great and Python is terrible.

* There are many languages with a 'mystique', like Lisp and APL. The correlation between 'mystique' and practical utility is inverse.

* Source code should liberally use Unicode symbols.

* Sigils to indicate variable type are actually pretty great, especially if enforced by the compiler. (See above regarding BASIC)

Let the flames begin!


> 1-based indexing is superior to 0-based indexing.

“Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration.” - Stan Kelly-Bootle


> Relatedly, the 80 column "rule" is as stupid in code as it would be in prose. Use your aesthetic sense.

Horizontal scrolling is almost always a bad experience for the user. But 80 is way too restrictive. I'd put the character limit at no more than half a standard monitor's width. What counts as a standard monitor depends on what's used by the devs writing the code, of course.

> Source code should liberally use Unicode symbols.

Give it a few years. But make sure there's good normalization (there are several ways to print the same glyph on the screen that are different bytes in memory).


At Google (at least when I was there) they have a hard rule of 120-character lines or something. (I think it varies by language.)

I thought the rule was ridiculous - until I realised one of my colleagues had set his computer up with 3 tmux sessions side by side, each perfectly fitting a 120 column code window. He never had to scroll sideways, and he was very productive with his setup. The fact he knew exactly how wide to make his terminals made his workflow much better.

I don’t follow that rule on my own projects, beyond keeping line lengths “reasonable”. But that rule might be on to something.


This is why I’ve come around on the 80-character rule after switching to Emacs: fitting four buffers side by side on my 27” monitor just works.

120 characters/3 columns is great for more verbose languages, but I find it a bit too wide for a vertical split on a 15” laptop monitor.


I just enable word wrap in my editor


I don't think the fact that it worked for one guy (or more accurately, that he adjusted his workflow according to this rule) makes it a good rule by definition.


It's not that they contorted their workflow to satisfy some arbitrary rules; it's that the fact of a dependable, consistent rule made it possible for them to formulate an efficient, dependable, consistent workflow at all.


The 80 column “rule” also discourages the use of descriptive naming, which IMO is far more important to readability than short lines.

Also, 80 columns is near impossible to adhere to for more verbose languages.


Descriptive naming usually isn't.

    E=m*c^2 or 
    objectEnergyInJoule = massOfObjectInKilograms * speedOfLightInVacuum^2 ?


Now imagine that you're not looking at a famous equation, and you tell me which one you would rather have to suss out the intention of.

    a = b/c

or

    numApples = numBananas/appleBananaConversionRate


The first one, no question about it.


Yes. We would want the descriptive names for identifiers which have broad scope, but short names for the local variables that work with them.

I would rather give a function the descriptive name in such a way that the parameters can then be given trivial, mnemonic names.

Strawmannish example:

   incrementAccountBalance(a, n)
   {
      ...
   }
The descriptive function name gives away what a and n are, so it is unhelpful to make this:

   incrementAccountBalance(account, amountToIncrementBy)
Reasonable middle ground:

   incrementAccountBalance(acct, delta)


With incrementAccountBalance(a, n), it's impossible to tell exactly what each argument is from the function signature alone. Of course I could assume that a = account and n = amount to increment by, but that's nothing more than an assumption, which only invites inevitable bugs. Alternatively I could look at the function body, and try to figure out what's going on, but that's time wasted that could be spent more productively.

Further, non-descriptive naming often assumes that the programmer reading and modifying the code has domain-specific knowledge. That might be true of mid-level or senior engineers who have been with a particular company for years, but the new grad coming into a financial services company (a field he has no training or familiarity with) will have no idea what "a" or "n" means. I was in that position, and descriptive naming absolutely would have helped me learn and become productive with the codebase faster.

I'd say that your reasonable middle ground would be the best practice.


The opposite of my choice, about which I also have no question.


For both Python and Go I use descriptive (often long) names and a ~90-char line limit, and use line breaks liberally to reconcile the two.


> Also 80 columns is near impossible to adhere to for more verbose language.

Not if you use Unicode (and Chinese language).


I settled on 100 characters as the sweet spot, with an exception for trailing punctuation being allowed to go beyond (because you don’t want to wrap a line just because of some trailing punctuation).


I remember hitting the punctuation dilemma with my self-imposed character limit. Putting the punctuation on the next line, looking at it displeased, moving it back, questioning the purpose of rules if they're not followed; racking my brain over the world's least consequential conundrum. Ultimately I decided to adhere to my limit, but to treat punctuation and the word that precedes it as a single unit to be wrapped together.


Also sometimes comments.


Nah, comments are important and should be in the main field of view. :)


Horizontal scrolling can be avoided with line-wrapping. It's not always a good way to deal with the problem, but it's useful for some edge cases.


Line wrapping is often difficult to read as well. If programmers wrap the line themselves, they can break it up logically, rather than letting the computer break lines wherever it's convenient. Further, they can respect the indentation level of the line, whereas most line wraps I've seen wrap starting all the way to the left.


Vim has a setting ('breakindent') to make wrapped lines start at the same indent as the original line.


Word wrap. Turn on word wrap.


In every editor I've tried it on, word wrap wraps the lines all the way to the left, instead of respecting indentation levels. This makes for very difficult grokking of control flow, both in languages with significant whitespace and C-style languages. If programmers simply wrap their own lines, they can respect the white space conventions of their language and make it much easier for other developers to read later.


VS Code respects the indentation level (provided editor.wordWrap has some value other than "off" and editor.wrappingIndent has some value other than "none"). In other words, if a line needs to be split into 2 "virtual" lines to fit in the window, VS Code inserts a "virtual copy" of the line's initial whitespace before the second virtual line. By "virtual copy", I mean that the copy of the whitespace is never inserted into the file, but is added during the rendering of the contents of the file into the window.

Emacs on the other hand, has the problem you describe, and probably always will because of how hard xdisp.c is to modify and because the maintainers of Emacs probably do not consider the problem important enough to justify a big change to xdisp.c.

I was never a vim user, but my cursory investigation led me to conclude that vim, too, has the problem you describe.


The parent assumes that word wrap, a feature we take for granted today, is even a thing in the editors/terminals of legacy systems that this 80-character rule likely targets.


> I'd say no more than half a standard monitor wide is the character limit.

So if "monitor wide" is 80 characters, like a DEC VT-100 terminal, or WYSE-50 or whatever, then we have a 40 column rule.


Have you tried formatting tools like prettier / black / gofmt / rustfmt? I used to have lots of tabs vs spaces type opinions but now “use a formatting tool and stop arguing about it” has replaced all of my style opinions. Once you have more than one person working on a project it’s just so nice to never have a stupid argument about formatting again.


It will always baffle me how some programmers believe that meetings are the bane of their existence because "they only waste time", while at the same time they can entertain the "tabs vs. spaces" (or "vim vs. emacs") discussion for hours.

I don't care about the indentation method, I just want to press Tab, or Ctrl+S and have it do the right thing. Not waste any brainpower thinking about it.


Agreed. It's a controversial opinion that opinionated languages/formatters are the way to go. :)


I pretty much agree with all of these but two.

> 1-based indexing is superior to 0-based indexing.

this breaks the identity `arr[idx] == *(arr + idx)`. as a c++ (and sometimes c) dev, I tend to consider "index" and "offset" to be functionally equivalent. hard pass.

> If you use spaces to indent code, or in any way rely on a fixed width font to "line things up", you have a fundamental misunderstanding.

I find "lining things up" to be pretty helpful for spotting mistakes in tedious code (eg, matrix initialization) where you have a bunch of similar lines of code in a row. am I missing your point, or do you just not agree this is useful?


> this breaks the identity `arr[idx] == *(arr + idx)`. as a c++ (and sometimes c) dev, I tend to consider "index" and "offset" to be functionally equivalent. hard pass.

And I think that's the problem. Programmers have collectively conflated the concept of "offset from the address" and "position in the list" as the same concept, and they're not the same thing. Both concepts have their uses but they're separate, distinct concepts. C and C++ don't index a list, they simply offset it, and too many other languages that don't care about in-memory addresses or offsets have carried the concept from offset over to index erroneously.


I can see how in a higher-level language it might make sense to use 1-based indices. this is, after all, how humans typically count. I just don't agree with it as a general statement over all programming languages. in low-level contexts where you are working directly with memory, it is quite natural for indices (or idx * element_size) to be equivalent to offsets.


Depending on the context of the program, either can be more comfortable. Pointers/enumerators? 0-based. Heavy math problem? 1-based.

I believe there are a few languages that let you set it.


> this breaks the identity `arr[idx] == *(arr + idx)`.

Well, then, make the identity `arr[idx] == *(arr + idx - 1)`.


Right. But while I'm just trying to do my work as a programmer, which of those two identities is going to be easier to work with, reason about, and come out of it with correct code? Hint: It doesn't have a "1" in it.

Your identity perfectly illustrates why 1-based arrays are worse rather than better.


That's true for C/C++, but in most languages used today where we don't use pointers, where is the advantage?

I'm hundreds of times more likely to have to write lastElement = array[array.length - 1] (there's the 1 again!) than I am to have to reason about the equivalent pointer arithmetic.


Even in a language without pointers, in a multidimensional array you can index the data one-dimensionally by array[index0 + N * index1] with 0-based indexing, as opposed to array[index0 + N * (index1 - 1)], where N is the length of the inner nested array.

And for that matter all algorithms for which zero-based indexing makes more sense are simpler, such as working with polynomials and expansions where the zeroth power is included in the expansion.

Having said that, I too prefer 1-based indexing because the list of algorithms that are written more cleanly (i.e. without constantly causing bugs when you implement them) is far larger for me personally, including all of linear algebra and related numerical methods.
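
A quick sketch of that index arithmetic in Python (row-major layout; N and the function names are just made up for illustration):

  N = 4  # number of columns in the inner nested array

  def flat_index_0based(row, col):
      # 0-based: element (row, col) sits at row*N + col in the flat buffer
      return row * N + col

  def flat_index_1based(row, col):
      # 1-based: the same element needs a (row - 1) correction
      return (row - 1) * N + col

  # The element in the 2nd row, 3rd column:
  assert flat_index_0based(1, 2) == 6   # 0-based coordinates, 0-based flat index
  assert flat_index_1based(2, 3) == 7   # 1-based coordinates, 1-based flat index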


In a language that has one-based indexing, can you treat a multidimensional array as one-dimensional? Or will it give you an array index out of bounds error?

That is, are there any languages that are both one-based, and will let you do that trick?


You would need it to support one-dimensional views (unless your data is just stored as one-dimensional to begin with). Certainly MATLAB and Julia do. I don't know of any new 1-based languages either way.


I thought we settled on lastElement = array[-1] for this problem (a little, but not entirely, tongue-in-cheek).


Yeah, let's use Unicode symbols. That lets us, for example, make it a challenge for others to figure out how to type stuff that they need to type. It lets us create variables that use characters that have almost exactly the same appearance as latin characters, but aren't. For bonus hilarity, create two versions of the same variable name, one with a non-latin-but-similar-appearance character, and the other with the latin character.

No thanks. Hard pass.


You could also use six different “right arrow” Unicode symbols, and have them each mean multiple different things, automatically inferred by the compiler depending on context! See the Lean mathlib ;-)


Indeed. If there's no button to type it directly on my keyboard, I'm not interested in having it in my code.


Not all keyboards are the same...

!@#$%^&*()_+}


That's why I said my keyboard. I'm not aware of any keyboards that would let you type the arcane runes used in APL without resorting to ALT-sequences or the Compose Key.

On a related note, if I could get a keyboard with an extra 50-60 keys that would send the proper scan codes to type unicode characters in one keypress that would be awesome.


I'm a big fan of Unicomp keyboards, heavy steel buckling-spring keyboards still made in Kentucky. They make one with 24 function keys I suppose you could map.

https://www.pckeyboard.com/page/product/UB40B5A

Then there are the point-of-sale keyboards; this one has 60 keys where you insert your own paper labels.

https://www.officedepot.com/a/products/361528/CHERRY-SPOS-Sm...


I don't know a lot about how keyboards work at the low level but wouldn't that require custom firmware to add new keycodes?


Well, you can either have software running on the machine that interprets "F15" as a particular key macro, and the keyboard doesn't care what you do with it, OR, yes many keyboards (especially the RGB variety) have tools to update the keyboard firmware to send different keys no matter what computer you plug it into, programming stays on keyboard. (often a simple arduino chip)

In this instance the Unicomp keyboard is not reprogrammable, while the Cherry is fully programmable.


Clojure is a highly practical Lisp that retains most of the Lisp mystique.

Previous Lisps were impractical largely due to lack of libraries, but Clojure comes with first class Java interop out of the box. But you still get macros.


I know you're joking, but the key difference is that OP's "controversial opinions" are high impact, whereas yours are not (e.g. spaces vs. tabs, 1-based vs. 0-based indexing). I don't think most of the opinions in the post are that controversial. Everyone who has ever worked for a fast-growing business knows it's OK to write garbage code sometimes, that some engineers 10x others, that unit tests are good, that SQL is to be treated as code, etc.


Spaces for indentation work perfectly fine without fixed-width fonts. If you indent code with leading spaces, things will "line up" with any font.

By the way, what's the misunderstanding with lining things up?


Spaces work well for indentation. You could argue here that typing one character (tab) is more efficient and maintainable than writing N (also semantically more powerful).

The real problem comes when people start aligning things on the middle of the line.


I think the solution is to rely on hanging indents rather than trying to align with the column of the previous opening parenthesis, bracket, etc.

So instead of this:

  def some_method(param1
                  param2):
      statement1
you do this:

  def some_method(
          param1,
          param2):
      statement1
Hanging indents work regardless of whether you use tabs or spaces for indentation.


My IDE types 4 spaces when I hit tab shrug


I'll demonstrate using periods due to the forum eating spaces. The parts after the dots line up in a monospaced font.

Fixed width:.good

Flexible:....not good

.............so not good


This doesn't make any sense. Indented code does not have non-whitespace characters to its left.


Sure it does.

  const ONE      = 1;
  const MILLION  = 1_000_000;
Even without non-whitespace it's a problem:

def foo(arg1,

........arg2,

........arg3)


Your first example isn't what I'd call indentation. "Alignment" maybe. I don't think anyone would expect that to work in a proportional font.

For the second one, I'd write it this way.

def foo(

....arg1,

....arg2,

....arg3)

I prefer that even in a fixed width font.


This still matches the parent's picture

> If you use spaces to indent code, or in any way rely on a fixed width font to "line things up", you have a fundamental misunderstanding.

As long as the limitations of lining things up "exactly" are avoided, spaces shouldn't be any different than whatever way comes from true understanding.


I do use spaces to indent code. And I still have no idea what my fundamental misunderstanding is supposed to be.


My take on it is that (most) code isn't a picture, so drawing with it (alignment especially) is missing the point that code is a form of text instructions.

If anything, the editor should "render" the text in a form that is pleasant to the eye, divorcing the meaning from the presentation.

Lining up things too much is a "works for me" way of baking presentation into the data, where the writer assumes that because it looks good on their setup, it will look good for the reader too.

Similarities can be seen in building web pages with tables, sending around Word documents, or hard-wrapping emails.


> the 80 column "rule" is as stupid in code as it would be in prose

I follow the 80-column rule in prose.


Emacs was a godsend for my 16 year old self.

It's an immensely self-documenting and self-explanatory system, and vastly different in that regard than anything else.

(Though once you go down the rabbit hole you eventually start to realize that elisp is a crappy way to program and the whole thing has a big-ball-of-mud architecture... Oh well.)

Point is, your preconceptions, from the heights of your experience, of what "newbies" find intuitive are likely very wrong.


> Vim and Emacs both have terrible user interfaces, and are an inferior experience to even a normal text editor let alone a decent IDE. Users have Stockholm Syndrome.

Well yeah, the UI isn't great. The whole point is to make it your UI, no?

I agree that Python isn't great for beginners, but I don't think BASIC is great either beyond a simple introduction to how we make computers do our bidding.


I learnt C as a first language, but think Python is a great language for beginners!

Forces okayish formatting, error messages are pretty readable, good library support for doing everyday useful things (for people not in the CS community), you don't have to strictly worry about types for your tiny scripts...

Why do you think it's not great? (in all sincerity)


Python teaches people that Python is a good way to do software.

The indentation is nice, but the language is not much more high-level than C, teaches bad habits by not thinking in types and granular interfaces, and encourages a counterproductive style of defensive programming - i.e. C++ is an awful language, but at least I can encode basic information about my code in the type system. Python trades that away so I have to write tests for the entire surface area - inside and out - of my code.

If there is something fundamentally wrong with my code, tell me now, not when it's half way through a job or worse attached to spending money.

You can write bad code in any language, but bad languages influence the way you think about programming. For example, C++ encourages you to think about side-effects in the language like SFINAE, D encourages you to metaprogram everything, Rust encourages you to think like a compiler (lifetimes), Haskell teaches you how to manage stateful code in a controlled and measured manner, etc. etc.

In conclusion, Python is not a terrible place to start (JS takes that award to my eye), but the progression to writing genuinely reliable and fast code is too obfuscated. I'd rather we teach people C. C requires a helping hand, you can fumble around with Python.


Now we're talking!

Although given that I, random internet stranger, agree on 6 out of 8 of your points, it means that they aren't too unpopular. :)


>Although given that I, random internet stranger...it mean that they aren't too unpopular. :)

there is a common perception that programmers often don't get statistics. Not that I do either, but just saying.


controversial opinion: the need of statistics and hard math for programmers is overrated :)


really controversial opinion: the abilities of programmers to use statistics and math is just right!


> Let the flames begin!

Yeah, it seems that your evil plan works. People just can't stand it despite your clearly stated intent.


"Give your controversial opinion" always, always just leads to to people saying the thing they know people they like will agree with and, and that they imagine people they dislike will disagree with. In practice, they end up just saying things that are entirely uncontroversial and immensely boring.

You, though, you're getting there.


I agree with every item in principle, and disagree with every one in practice. Here's to a better world!


>If you use spaces to indent code, or in any way rely on a fixed width font to "line things up", you have a fundamental misunderstanding.

What if I just like the aesthetics of consistently-indented code?


Can I marry you?

I will add:

* Tabs > spaces

* Proportional > monospaced

* White background > dark theme


White background if there's a window nearby. Dark background if in a dimly lit space.

I don't know if I use tabs or spaces anymore. I hit tab, but my IDE just sort of makes it up from there. Works well enough.


Visual Studio (not Code) handles it quite nicely. If you hit tab and it replaces it with 4 spaces, then you backspace, it removes the 4 spaces as if they were a tab. So it saves with spaces, but treats them like tabs.


Though I hate what you say, I will defend to the death your right to say it.

But holy cow the white background made me angry hahah


I've been switching over to tabs to mark indentation levels and spaces for alignment after that. I used to use all spaces. Disagree completely on the proportional fonts, but maybe I just haven't tried one. Code presentation is important to me, since it makes it really easy to grok the structure of what's going on and to find some simple errors if you align certain things together.


Fluorescent white is for accountants and managers.


Can you elaborate on the fixed-width fonts point?


As long as source code resides in plaintext files, writing in columns (i.e. vertically aligning blocks of text) seems like an excessive endeavor with little gain.

We are not typesetting; formatting resources should be kept to a minimum to make the code easy to handle in plaintext. Fortunately, code is restricted enough that we can do pretty well without those.

You don't write spaces to align text in Word: it's a nightmare to maintain and the software provides better tools for that purpose. Why do we still write code as if we were using a typewriter?

You may think you are improving readability by aligning columns of text, but then, the impossibility of standardizing such baroque formatting rules goes against that. Furthermore, you are preventing people from using a variable-width font, which is actually better for readability.


I believe you'd like elastic tabstops: http://nickgravgaard.com/elastic-tabstops/


I love them! It's a shame they didn't really catch on.


So you’re against any indentation? I don’t think you’ll find many programmers at all who agree with that statement.


Indentation is OK, sure. I was talking about aligning stuff on the middle of a line.

I find tabs better for indentation but in practice end up using spaces.


If you are interested in experimenting with non-fixed width fonts for programming, you could try Source Insight. I was at a company that used it for a while and surprisingly, I didn't miss using fixed width fonts at all.


* 1-based indexing is superior to 0-based indexing.

1-based indexing is a special case of 0-based indexing.

* If you use spaces to indent code, or in any way rely on a fixed width font to "line things up", you have a fundamental misunderstanding.

Fundamental misunderstanding of what? Not having shit PRs on Github?

* Relatedly, the 80 column "rule" is as stupid in code as it would be in prose. Use your aesthetic sense.

Yeah okay true

* Vim and Emacs both have terrible user interfaces, and are an inferior experience to even a normal text editor let alone a decent IDE. Users have Stockholm Syndrome.

Also true

* For beginners, BASIC is great and Python is terrible.

The language for a beginner is irrelevant; the only metric that matters is: are you enjoying your coding experience? Keeping students coding is all that matters; the language is irrelevant.

* There are many languages with a 'mystique', like Lisp and APL. The correlation between 'mystique' and practical utility is inverse.

This is almost a tautology: of course mystique will lead to less practical utility, because there is simply less interest, less tooling, and less understanding about how to use the tool.

* Source code should liberally use Unicode symbols.

Someone has never looked at Agda source code before and it shows.

* Sigils to indicate variable type are actually pretty great, especially if enforced by the compiler. (See above regarding BASIC)

Gross.


zero index makes sense because (and i will use C syntax here)

a[b]

is equivalent to

*(a + b)

with a being a pointer and b an integer. this can be seen as well in the fact that

a[b]

is equivalent to

b[a]

where in both cases a is a pointer and b is an integer.

Thus, the index into an array is fundamentally the same as an offset into that array. Usually, in languages that do use 1-based indexing (like Pascal strings), the 0th byte is used for something else.

The point being that thinking in terms of 1-based indices means that you have to use a system where addr+0 is invalid, and the first element of an array at address X is X+1.

Now you could just shift everything by 1 and just declare that +1 means +0, but then you lose all meaning to the offset vs index and ahhhhhh


In my view, the fact that a[b] is equivalent to *(a + b) in C and C++ (in most languages with arrays the latter notation is meaningless) is a sort of a neat coincidence which we shouldn't feel bound to in language design in general.

There are many reasons to depart from this idea in languages far removed from pointer arithmetic. For example, Julia has 1-based indexing and I appreciate it for that when doing computational mathematics. It is closer to familiar notation: A[1][1] is the element of matrix A in row 1 and column 1. In a language with zero-based indexing, it is sometimes easier for me to allocate an extra row and column, wasting memory, than to shift indices and risk an inevitable off-by-one somewhere.


"For beginners, BASIC is great and Python is terrible." I am curious about this. Do you mind elaborating?


Consider the classic koan:

  10 PRINT "Retardo_88"
  20 GOTO 10
To fully grok this, the concepts you need are: the idea that you can instruct a computer to do things, the notion of source code as a list of such instructions which will normally be executed one at a time, in order, and - crucially - that the instructions can reference other instructions and that this is the fundamental leap that makes computers interesting. In other words, Turing Machines 101. It's a perfect demonstration.

Now consider the simplest equivalent program in Python:

  while True:
      print("Retardo_88")
In addition to all of the insights mentioned above, you need the concept of higher order control flow statements, the concept of nested execution, the concept of boolean literals, the concept of a function call... there are so many questions that are hard to answer. Why do I need parentheses? Why do I need the colon? What's the deal with the indent? What gives with the weird grammar of "while true"? Etc...

It gets worse the moment you want to do anything nontrivial and have to wrestle with objects, methods, and dot-syntax. BASIC sends the message that computers are simple, easy to grok things that follow simple, logical rules; Python sends the message that they are a bottomless well of dizzying complexity.


I would argue that a more measured alternative to your conclusion is “people should learn computers from the bottom up”, starting with direct exposure to a program counter and ending with high abstraction. I’m not sure this is the best way to learn, but I think it would be a lot better than what we teach now.

A great intro CS curriculum could be some RISC assembly -> C -> Lua -> Haskell -> Agda, maybe also branching off into lisp and prolog.

If you wanted to get really extreme, you could even tack on “Discrete transistors -> logic ICs -> FPGA ->”


As a perpetual beginner, I feel BASIC is under-rated. We should use BASIC to start children off with computing. They'll learn the hairy stuff in due time, but the key is to get them hooked early; and BASIC does an awesome job of that.


What was your first language? Mine was Basic.

Pascal was the first language I was taught in.

I still have not (after 40 years) learnt Python... Hang on, when did Python start....That long


"Modern" BASICs/Pascal are also otherwise decent languages that suffer from a reputation of newbies writing bad code with them.


I’m triggered


>The correlation between 'mystique' and practical utility is inverse.

lol tell that to the banks and trading firms that pay Arthur Whitney millions for his APL interpreter that looks like something out of the IOCCC


Vim is not a gui aka no user interface. I know you are joking though: vim > emacs > any IDE!


emacs is essentially a framework for building custom IDEs.


> 1-based indexing is superior to 0-based indexing.

Yes! But programmers are weird like that. I understand the rationale in the case of C (nothing that couldn't have been solved then and there), but for high-level or scientific computing? How many off-by-one bugs has this caused?

> If you use spaces to indent code, or in any way rely on a fixed width font to "line things up", you have a fundamental misunderstanding.

Agree, that's one of the things I hate about Python; the indentation is probably the worst solution to the problem of scoping code.

> Relatedly, the 80 column "rule" is as stupid in code as it would be in prose. Use your aesthetic sense.

Agree

> Vim and Emacs both have terrible user interfaces, and are an inferior experience to even a normal text editor let alone a decent IDE. Users have Stockholm Syndrome.

Agree, both are very good pieces of software but I feel 99% of the new people swearing by them just do it to try to appear l33t

> For beginners, BASIC is great and Python is terrible.

In 1995? Perhaps. Now? Hell, no.

> Source code should liberally use Unicode symbols.

Hell no.

> There are many languages with a 'mystique', like Lisp and APL. The correlation between 'mystique' and practical utility is inverse.

More or less. Great, but difficult, programming languages will never become mainstream, and as good old Joe Stalin used to say, "quantity is a quality on its own".

> Let the flames begin!

As always with this type of comment, the OP is not as alone as they think they are.


Not sure why 1-based indexing is more logical in high-level or scientific computing.

The concept of x[k] meaning the object k spaces away from the beginning of x is very general, and still intuitively meaningful in languages without pointer arithmetic.


> Not sure why 1-based indexing is more logical in high-level or scientific computing.

Because mathematics and physics have used 1-based indices since forever? I first learned about matrices in math classes, then in numerical methods (all 1-based), and then we programmed in Fortran (1-based).

It is not that it is more logical, it is just a convention. But I have seen wayyyy more people complaining about 1-based index languages than about 0-based languages.


> “What’s your most controversial programming opinion?” ... What follows are twenty of the highest voted answers

By definition, the highest voted answers are also the least controversial. "Anti-establishment" and "populist" maybe. Unpopular? Certainly not.

My controversial opinion is that most software tests should be integration tests that use randomized inputs. I doubt I'll get very many upvotes for that idea.


> My controversial opinion is that most software tests should be integration tests that use randomized inputs. I doubt I'll get very many upvotes for that idea.

Probably not, but largely because you used the word "randomized". One of the most annoying things to deal with in a CI workflow is flaky tests (where something unchanged will randomly fail, or re-running will result in a different outcome).

The most common way I've seen this happen is with an external dependency like using a database server or calling a 3rd party API. The failure is usually temporary (eg: timeout) and so it's a false negative. This trains developers to do a lazy workaround: re-run the build until it passes.

True "randomized" data has the possibility that you get all kinds of flaky tests, but that they cause false positives: the failures are legitimate, passing is not.

Pseudo-randomized [1] data is a different story (see: fuzzing [2]), because you still have predictable tests -- the difference is a human doesn't have to think up thousands of test cases. I'd say it's also important to add specific test cases when you do find a specific type of failure, just to be sure that always gets tested.

I think if you call it "pseudo-random" or "fuzz testing" you'll find it's much less controversial.

[1] https://en.wikipedia.org/wiki/Pseudorandomness

[2] https://en.wikipedia.org/wiki/Fuzzing


> One of the most annoying things to deal with in a CI workflow is flaky tests (where something unchanged will randomly fail, or re-running will result in a different outcome).

Just record the input whenever the test fails, and have an option to re-run a test with those inputs. The tests are still flaky, but it doesn't matter if we can reproduce the results.

You don't even have to record all inputs. You could just use a random seed, and record the seed at each run. Tests become deterministic again simply by fixing the seed (typically to a value that caused a previous run to fail).
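
A minimal sketch of that seed-recording idea in Python; the test body and the process_batch function are hypothetical placeholders:

  import os
  import random

  def process_batch(payload):
      # Placeholder for the system under test.
      return len(payload)

  def test_with_recorded_seed():
      # To reproduce a failure, re-run with TEST_SEED=<the reported seed>.
      seed = int(os.environ.get("TEST_SEED", random.randrange(2**32)))
      rng = random.Random(seed)
      payload = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 1024)))
      try:
          assert process_batch(payload) >= 0
      except AssertionError:
          print(f"failing seed: {seed}")
          raise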


Using a seed with the random generator is exactly how you get pseudo-randomness, which gives the same sequence every time and is absolutely fine for tests.

If you want the build (test) results to feed back and save the failing input, that's an OK goal, but I think it's hard to pull off: now you have your build making commits to source control (causing another build), or you have test data stored somewhere else, not in source (when really, it should be part of source).


I'm not intimately familiar with "fuzzing" techniques, and by "randomized inputs", I am indeed referring to tests that could be flaky in the presence of bugs. Organizational discipline is indeed needed here, in order to not ignore the occasional flaky failure, and to investigate the cause of the flaky error. I believe the benefits outweigh the drawbacks.

Here's a link that talks about the above in more detail - https://software.rajivprab.com/2019/04/28/rethinking-softwar...


I use fuzzers extensively in software I write. I find it quite enjoyably humbling how quickly a newly minted fuzzer finds bugs.

I use a seeded random number generator. During development I seed it randomly. When crashes happen I output both the seed and iteration so I can hardcode that seed and reproduce the problem. And then in the CI I’ll often fix the seed so the test suite is stable.

But it’s also good practice to run fuzzers outside the context of a CI. You want your CI tests to be fast and stable. Fuzzer tests can also be slow and thorough. FoundationDB has dedicated hardware that runs their fuzzer 24/7 and spits out any errors it finds. If that fuzzer finds problems it doesn’t break the build. But someone will take a look, reproduce the problem, hopefully convert the bug into a unit test, and fix the issue.


This is completely fine: have a separate test set for these, outside the deploy pipeline, which is about preventing regressions, not finding bugs that were previously missed.


Agree, this is the way to go.

On a separate note, if the fuzz tests find something wrong, the failing data should definitely be integrated into the stable test suite to ensure non-regression.


> One of the most annoying things to deal with in a CI workflow is flaky tests

I guess another controversial opinion: this is a problem with the idea of CI (or at least how we work with it and what we expect to get out of it), rather than the idea of randomized tests.


In what way? My personal expectation of CI is predictable, repeatable builds, that give me some assurance the software is working as designed. I also like that it forces everything to be scripted: no "only Bob knows how to build the release file."

Flaky tests are an indicator of poor code: maybe it's your actual code, maybe it's a bug in the test code, or maybe it's external dependency + lack of error handling in the test code but there's a problem somewhere.


The presence of bugs doesn't necessarily indicate useless software. If tests are failing (or flaky...), that's probably something to look at at some point, but that doesn't necessarily mean it's the highest priority to look at. In most places where CI gets deployed -- at least in commercial environments -- there seems to be a goal of making test failures a non-maskable interrupt.

I did admit this was controversial! But it fits in with a more general view that there are a lot of tools which make good servants but poor masters. Auto-builders are a good thing, partly because (as you say) they can help to clarify what is required to make a build, and partly because (especially for dependency-heavy software, which seems to be the norm nowadays) they can help catch things quickly when the dependencies shift beneath you. Making them a hard gate on releases seems a little too close to making the tooling your master, though.

(Somewhat separately, I also worry about CI acting as a hiding place for complexity. Sometimes it reaches the point where nobody knows how to make a build without the CI tool any more. Then local testing and debugging become difficult.)


You have a good point. I think the danger comes from the potential for abuse. Once you ship software that has failing tests, you've established a precedent and are likely to face pressure to do so again, even when the tests are more critical. That's why I'd resist that, at least. My experience is that if there's a real problem and later everything in production is on fire, no one cares about the caveats or risks you pointed out.

I do also agree that CI can hide complexity, but that's true of any tool as well as non-CI builds. Compared to a human running things, it's at least somewhat self-documenting due to being scripted.


> By definition, the highest voted answers are also the least controversial

That's not correct because it leaves out motivation to vote. Imagine a controversial question where, say, 1/3 of the population feels super strongly and is likeliest to vote on it. Their view will shoot to the top. Then the other 2/3 will go "wtf is wrong with this community".


> most software tests should be integration tests that use randomized inputs.

Good for fuzzing, bad for determining whether the output is what you expected it to be, because finding out what the output should be is .. the reason you wrote the software.

I've seen this done with an automated spec translation tool that produced an unoptimised but correct H264 decoder and a set of reference inputs to achieve coverage of the actual target decoder.


> Good for fuzzing, bad for determining whether the output is what you expected it to be, because finding out what the output should be is .. the reason you wrote the software.

The solution for this is property-based tests. Outputs almost never have maximum entropy. They have invariants, and they're related to the inputs in various ways.

Trivial example: authenticated encryption. The input is some random text, some random key, and some random nonce. The result is a message. One property to check is that the message can successfully be decrypted with the same key, and the resulting plaintext will be the same as the random text from the beginning. Another property is that any message that is not the message you computed will fail to decrypt (assuming you pretend not to know the key as you corrupt the message). Testing (crudely) that you can detect forgeries (or at least accidental corruptions).
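
For a concrete flavour of those two properties, here is a rough Python sketch using the cryptography package's ChaCha20Poly1305 AEAD (key/nonce sizes and the InvalidTag exception are as that library defines them; the trial count and message sizes are arbitrary):

  import os
  from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
  from cryptography.exceptions import InvalidTag

  for _ in range(100):
      key = ChaCha20Poly1305.generate_key()
      nonce = os.urandom(12)
      plaintext = os.urandom(os.urandom(1)[0] + 1)  # random message, 1-256 bytes
      aead = ChaCha20Poly1305(key)
      ct = aead.encrypt(nonce, plaintext, None)
      # Property 1: decrypting with the same key and nonce returns the original text.
      assert aead.decrypt(nonce, ct, None) == plaintext
      # Property 2: a corrupted message fails to decrypt.
      corrupted = bytearray(ct)
      corrupted[0] ^= 0x01
      try:
          aead.decrypt(nonce, bytes(corrupted), None)
          raise AssertionError("forgery was not detected")
      except InvalidTag:
          pass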


That's more popular than you think.

If you said "PHP is a pretty good language and I enjoy working with it", you may get a lot of closet PHP developers upvoting you.

A real unpopular opinion is that testing is not worth the effort.


Yeah, it doesn't really read like a list of programmer flame-bait which I think is ultimately what the title implied.

I mean consider this example of a "controversial opinion":

> Print statements are a valid way to debug code.

I'd place this about as controversial as "functions with many arguments are an ok choice".

The fact that there is neither a statement about the proper indexing of arrays nor one about which programming language to never use is clear evidence this list isn't about controversial topics.


I prefer functions with no inputs, quite honestly. No outputs either. Also no internal logic. To hell with variables as well.


Back in the day, compiling a Fortran program that didn't do any input or output would get you a "Program is trivial" message (and nothing else).


Excerpt from The Heart Sutra, a TL;DR translation for Programmers


It depends on the context I think. Library code (most library code anyway) should have unit tests covering the public API at minimum. Unit tests might also be a good idea for interpreted/highly dynamic languages like Python, if only because you need to implement something like a compiler's checks on signatures and arguments by hand in order to keep non-trivial applications sane.

But generally I’m with you. Unit testing is overused and often makes code harder to read, maintain, and build on. That's aside from it not providing the information most unit test advocates think it provides.


A bit like how /r/unpopularopinion on Reddit ends up being full of popular sentiments.


> A bit like how /r/unpopularopinion on Reddit ends up being full of popular* sentiments.

*racist


Yeah, I vehemently agree with every single opinion listed. I would add that comments aren't just extra, they are often outright lies.


> Programmers who don’t code in their spare time for fun will never become as good as those that do.

I live my life by the "4 good hours" rule, inspired by the book Deep Work by Cal Newport. If I can put in 4 good hours of deep work where I am iterating and learning on a skill, I consider the day a success.

This rule is specific to improving at a new skill in flow state. This does not mean I blow off the rest of the day or refuse to answer emails after those 4 hours. I still exercise, answer emails, write documentation, cook dinner, etc. It just means I choose to be kind to myself after I've achieved those 4 good hours.

After you work at a job for a while, things tend to become stale and you stop learning. I have a novel approach to this: I find a new job that will actually be challenging. This is actually harder than you think!

In Deep Work, Cal estimates that you have 4 good hours of learning in you a day. Hell: You probably have less. The four hour rule refers to virtuoso violin players[0] who I am willing to admit probably have more talent and dedication than my stoner self.

0: https://www.johndcook.com/blog/2013/02/04/four-hours-of-conc...


I've used time tracking apps for almost a decade now. I strongly agree that 4-5 hours deep work is a solid accomplishment in any office environment. Even on days where I spend 90% of my time in productive apps and don't have any meetings, it's still surprisingly difficult to pass 5 hours of productive time in an 8-hour work day. It's amazing how much time disappears to things like lunch, bathroom breaks, "quick" phone calls, and other breaks.

That said, I think it's a mistake to believe that there is an upper limit on productive hours in a day. I'm part of a mentoring group where I see this idea stated as fact by a lot of juniors. The catch is that they've all read different sources that put the upper limit at anywhere from 4 hours to less than 2 hours per day. This leads to a self-fulfilling prophecy where they stop trying to work after they've reached their pre-determined target, whatever that is.

When working alone or in isolation, I can often put in 10+ hours of measured, productive time in a day. The key for me is to do it in 2-hour chunks, with 1-hour breaks in between to recharge by playing with the kids, cooking and cleaning, exercising, and so on. I wouldn't recommend doing this consistently for an employer, but it's been a huge boost for hourly consulting and working on my own business.


You know your stuff about Deep Work. I'm working on a site that helps people track it.

Would you be open to giving feedback as an experienced deep worker?

My goal is to convert people like you as opposed to people who have never even heard the term (at the beginning).

trywinston.com (you'll get a bounceback w/ my email for feedback if you sign up)

No worries if not!


There is the 4 hour a week movement.


The problem with this is that deep, intellectual work on the level of programming typically needs to be surrounded by communication loops. A minimum of six hours a day seems about as reasonable as it gets...

It would have to be actual work though. Do we all honestly think the average developer would genuinely be able to set aside all the internet distractions and put their 6 hours in (2 hours communication + 4 hours development)?


My (slightly controversial) pet peeve is the abuse of the notion that "premature optimization is the root of all evil." Sure, let's not prematurely optimize, but let's also not use that as an excuse to make horrible decisions or unforced errors also.


The full quote is much much more reasonable:

"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%"

Unfortunately everybody seems to forget about the last bit.


I think the first bit, that is also always dropped, is also important: "We should forget about small efficiencies". Removed from that context, it has morphed into an excuse by some people to not pay attention to efficiency at all, even when the potential gains are large for a relatively small amount of work.


There's also a tendency to forget the impact of framing.

One or two micro-optimizations may not add up to much. But a habit of, say, avoiding unnecessary object churn, might add up to a very large efficiency gain, a few microseconds at a time.
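As a rough illustration of the habit (Python; exact numbers will vary by runtime):

    import timeit

    VALID = ["red", "green", "blue"]

    # Churn-heavy: allocates a brand-new set object on every single call.
    def is_valid_churn(color: str) -> bool:
        return color in set(VALID)

    # Churn-light: the set is built once and reused.
    _VALID_SET = frozenset(VALID)

    def is_valid_reuse(color: str) -> bool:
        return color in _VALID_SET

    if __name__ == "__main__":
        for fn in (is_valid_churn, is_valid_reuse):
            print(fn.__name__, timeit.timeit(lambda: fn("blue"), number=1_000_000))

Each call only wastes a microsecond or so, but in a hot path those are exactly the microseconds that add up.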


Especially considering the source: Knuth was a ruthless micro-optimizer, but his micro-optimizations actually made sense.


I agree but for a slightly different reason: I think the notion is also highly subjective in its applicability... some types of problems really do need a lot of forethought to make efficient - waiting until later can cause significant problems when decisions are hard to pivot away from or undo. An easy example is DB design; the cost of neglecting these considerations upfront can be massive and sometimes impossible to migrate away from.


I agree with both of you. This quote is often used as a crutch to justify poor planning. Certain types of systems are very hard to "add" performance to as an afterthought and really need to be designed around it.


Indeed, when making a tech decision it is important to think about how expensive it will be to change that decision later on. Avoiding premature optimization isn't an excuse to carelessly paint yourself into a performance corner.


In my experience few people think that optimizations are to be avoided at all costs. Whereas it's very, very common for people to try micro-optimizing their code as they write it. Generally speaking, it's much easier to profile and refactor well-written but potentially slow code than it is to try to understand and untangle a mess of "optimizations" that are also a performance nightmare.

Admittedly this may depend on the domain you're working in. And of course some people may take the advice to an extreme in the other direction but this is (IMHO) rarer and much easier to correct.


I would say that the general trend these days is definitely in the other direction outside of some niches.

Performance is a design issue and it is hard to do as an afterthought. In fact, micro-optimizations are often the only optimizations you can do without changing your design.


I'm certainly not against micro-optimizations. But they are very context sensitive and cannot be done without proper profiling of the whole program using representative workloads. I find people are often writing spaghetti code because they heard it's "faster".

That said I would draw a distinction between implementation and design. The surface API needs to be well thought out before hand. But this should be done before writing any real code (exploratory code aside).


Yes, but evil is a polynomial and has many roots, some of them are complex. – dan_waterworth

(from comments of https://softwareengineering.stackexchange.com/questions/8008...)


It's an incomplete quote, that mediocre programmers use as justification to be sloppy and lazy. The result is pulling in thousands of dependencies to make something that runs 100x slower and uses 100x more memory.


True, but if you're evaluating the "success of software" by the code, you have already missed the mark. IRL, the cost to build & maintain the software is almost always the determining factor in whether the project is a success or not. Building off the shoulders of others is the only reason 99% of projects can be viable.

value out > value in


> Sure, let's not prematurely optimize, but let's also not use that as an excuse to make horrible decisions or unforced errors also.

That's similar to my reaction to "Unit testing won’t help you write good code." Of course it won't... but you are just using it as an excuse to not test your code at all.


Gonna add my own to this guy's (imperfect) list, "Engineering managers should be able to write code" https://qvault.io/2020/07/14/your-manager-cant-code-they-sho...


This is a controversial opinion on HN only because of how many managers & product people there are here.

Of course engineering managers should be able to write code.


Yes I think it's a huge benefit to have managers who really understand software from personal experience. I have been in teams with non-technical management, and my experience is that so much time is wasted on irrelevant details. Often managers will ask the wrong questions, and do things the hard way because they're unaware of how the requirements could be modified a little bit to make the whole system work a lot better, and a lot easier.


The responses are telling of how controversial it is, as if people are trying to make excuses for why an EM should not be able to write code and how that's OK. It's not.


I suspect there's also a great deal of "If a tree falls in the forest and no one hears it, does it make a sound?"-effect going on. Would you consider a secretary who coordinates an engineering team and handles communication between them and the rest of the company, but doesn't make any technical decisions, to be an "engineering manager"?


A good manager/leader doesn't need to be competent in the field, but needs to be aware of their incompetence and know how to defer to those who are competent.

A good manager with a good team that includes one or more senior technical people will be able to rely on their guidance and lead the team effectively.

But any non-technical manager with a junior team is a recipe for disaster.


A good leader doesn’t ask people to do something they themselves wouldn’t do. If the leader has no clue how to code or debug then they have no clue what they are asking their subordinates to do. That is a horrible way to lead in my opinion.


That's nonsense. A CEO cannot be a jack-of-all-trades and has to lead a large organization and direct many people to do things that they, themselves, cannot do.

And scaling back, a manager of a cross-functional team is at a similar disadvantage. I work in a field where we mix ME, EE, CS/Software Engineering, IT/Ops, and other fields. A team could consist of people from all disciplines. Hell, right now I am learning the basics of astrodynamics so I can write the software related to it, the lead is an AE and can't code to save their life. They're still an effective leader.

In a previous team, the lead could not manage a Linux server, but still had sysadmins in the team. Did that make them an inherently bad leader? No, they deferred to the expertise of those team members. Like a good leader.


A bad manager will do much better if they are competent in the field they are managing. And there are plenty of "bad" managers out there.


If the manager can't code, the second best option is if they can't use the computer at all.

The worst situations are with those in the middle. "I know what the computer is: a list of PowerPoint documents uploaded in SharePoint, so that's what we're going to do. Also, I don't really understand what you guys do, but here is a new Jira where you have to document it step by step. Then I will read your descriptions, and regularly schedule meetings based on my misunderstanding of what you wrote."

In fact, imagine a manager who wouldn't even go to work and just let the engineers work alone. Ironically, that would be more Agile than what 90% of companies do.


Sort of agree, it's enough if they could at some point. If they keep an ear to the ground about how things work and what's possible, that goes a long way.


> Sort of agree, it's enough if they could at some point

Managers are the last people I'd want to learn on the job this heavily - they can't start from not knowing coding if they want to manage engineers.


Just to clarify, I meant that if they at some point knew how to code, that goes a long way to understand the mindset. It doesn't matter as much if they could code their way out of a paper bag if you put them in front of a modern toolchain.

A general doesn't need to know how to fly a fighter jet. But it helps if they stay up to date on modern tactics.


Or at least they should be able to understand the project from a technical point of view.

One of the most frustrating things working with non-technical managers is their lack of understanding of the effort needed to accomplish a task. Oh you need to re-write an entire library from scratch, work with Joe and have it done by Wednesday (today is Monday).

Now good luck explaining and convincing them that two days is not enough.


I really don't understand how companies get in the position where they think it is a good idea to have people who hardly know what a "library" is managing engineers.


Because managing people is a different job to development. Chances are a manager who can't code who is a shitty manager would still be a shitty manager if they could code.


Why should someone who has no clue how to code or debug lead people doing those activities? It’s like the seeing being led by the blind.


So they can do more targeted micromanagement?


No, so they can provide leadership that's informed and helpful.


They aren’t informed if they can’t code or debug. A leader shouldn’t ask people to do something they wouldn’t do. In this case the leader can’t even do what they are asking others to do. In my opinion, they shouldn’t be the leader then.


Is this so controversial?


Premature Generalization is worse than Premature Optimization


Related: Asking performance questions at the design stage is not a form of Premature Optimization.


Yeah totally. Abstraction should be a response to code duplication, not a hedge against imagined future requirements.


Abstraction should not be a response to imagined future requirements (though there is a fuzzy boundary between imagined and reasonably-foreseen near-future requirements: you don't want to spend effort solving problems from the hazy future, but you also don't want to be doing major rework in a month because you didn't take reasonable steps in anticipation of something clearly anticipated but not fully solidified), but code duplication isn't the only trigger. Problem decomposition often results in reusable abstraction as a natural consequence.


> Premature Generalization is worse than Premature Optimization

I don't think that's true, I just think premature generalization is more common among people who have internalized the maxim about premature optimization.


WET (write everything twice) > DRY (don't repeat yourself)


As someone sometimes guilty of both, agree completely.


Almost none of these are controversial now (and the one about testing probably isn’t based on my own anecdata, but it should be), but I certainly remember many of them being controversial at the time they were posted. It’s fascinating as a sort of time capsule, and revealing that programming attitudes do still evolve.

Of course we know this. We can observe it in the huge uptake of static typing and functional/immutable approaches in areas they were uncommon not so long ago. We can see it in the huge rifts over whether/how much to use JS on the web or how to build cross platform UI. But it’s interesting to me to look back and see how dramatically some attitudes have changed on topics that seem almost banal now.


> Given the area of a circle is given by Pi times the radius squared, write a function to calculate the area of a circle.

> Amazingly, more than half the candidates couldn’t write this function in any language

I don't think this is true. Or, they were interviewing candidates fresh out of college who never really paid attention in any classes. Or, the problem was presented to the candidate in an overly complicated fashion (which is weirdly common in technical interviews) and the interviewee was confused, although that seems unlikely.

I think it's easy to get wrapped up in this mindset of "oh we need super complex algorithm whiteboarding because candidates nowadays don't know how to code!" when you hear stories like this, and it's just not true -- at least in my experience of interviewing candidates. Maybe I've just been lucky.


There is a reason why fizzbuzz is a famous interview question. There are plenty of people who can't code (to any reasonable standard) but still apply to programming jobs. They might be fresh out of college, or they might come from a job where they were doing glorified IT or making Excel spreadsheets.

"super complex algorithm whiteboarding" is obviously not the answer. You want something so simple no qualified person could fail even under stress, but hard enough that you know that the person is confident in writing if-statements. "Iterate a B-Tree in-order" filters too many people, "compute the area of a circle" is maybe a bit on the easy side.


Sigh. I tried using FizzBuzz as an interview question for a while and never got a single applicant who could solve it. Some just had no idea how to do much of anything, but it turned out almost no one had ever even heard of the modulo operator before.


The point of FizzBuzz is to see if the person understands the basic concepts of a loop and "if" tests. You're supposed to give them the modulo operator to ensure you're testing the right thing.


I don't agree. I think the modulo operator is common enough that it should be known, and even if they don't know it, they should be able to quickly write a modulo function themselves (something like a loop that repeatedly subtracts b from a while a >= b, then returns a; see the sketch below).
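A quick sketch of that fallback, just to make the idea concrete:

    def naive_mod(a: int, b: int) -> int:
        """Modulo by repeated subtraction; assumes a >= 0 and b > 0.

        The loop must run while a >= b: with a > b, exact multiples
        would wrongly return b instead of 0.
        """
        while a >= b:
            a -= b
        return a

    assert naive_mod(7, 3) == 7 % 3
    assert naive_mod(10, 5) == 0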


You say you don't agree, but you don't disagree either. Perhaps you should know a modulo operator but that's still not what FizzBuzz is for. It's to see if the candidate has the most basic concepts of loops and tests.

Adding knowledge of a relatively obscure operator (can't remember the last time I needed it in real life) just makes it a trivia test.


Asking a candidate to describe 2 different ways to do modulo is an excellent interview question, imo.


> "Iterate a B-Tree in-order"

I'm not sure I could do this without a B-Tree class to look at, people say "b-tree" and really mean one of like 20 possible designs.


To be fair, if the interviewer is doing it properly, either they'll give you the data structure as part of question, or they'll let you pick whichever B-tree design is convenient.


Controversial opinion of my own: not being able to solve fizzbuzz doesn't necessarily indicate lack of programming ability. The "trick" there is the modulus operator, and I can easily imagine somebody who was otherwise a good, experienced, effective developer never having come across the modulus operator.


I cannot imagine a situation where someone is a good, experienced, effective developer who has never come across the modulus operator. I just put about 6 seconds of thought into it and you could also program:

* Fizzbuzz with a loop that has a recursive call inside.

* Fizzbuzz with a loop that has two additional, resettable counters (sketched below).

I don't even work as a developer and managed that in less than one minute of thought. Any experienced developer who cannot program Fizzbuzz isn't good.
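For the record, a quick sketch of the second variant (two resettable counters, no modulo anywhere):

    def fizzbuzz(n: int) -> None:
        threes, fives = 0, 0
        for i in range(1, n + 1):
            threes += 1
            fives += 1
            line = ""
            if threes == 3:   # every third number
                line += "Fizz"
                threes = 0
            if fives == 5:    # every fifth number
                line += "Buzz"
                fives = 0
            print(line or i)

    fizzbuzz(15)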


I would say not being able to solve fizzbuzz without the modulus operator would be an indicator of lacking programming ability.


> I can easily imagine somebody who was otherwise a good, experienced, effective developer never having come across the modulus operator

I can't.


> Most comments in code are in fact a pernicious form of code duplication.

Man, I used to work with a developer who, in basically all respects, was a better developer than I am. But he was terrible for adding these types of pointless comments. When I'd point out in a review that his comment used the same 3 words as the code did in a different order, he'd get all defensive.


I'd rather my colleagues repeat themselves in their documentation comments than provide none at all. So I think the habit of writing those comments is more important than making sure that every single one has no repetition. I'm working in a codebase now that has a frustrating lack of documentation of important details.

A data field called "imageURL" that's a string -- is that a remote or local URL? Where did it come from/how much can I trust that it's well-formed? (And why isn't it already an actual `ParsedURL` type?) Another datatype has three fields "name", "username", and "id", all strings, with no explanation of their purpose or differences.

This means that I either have to track down the teammate that wrote the code and ask them every single time I run into something like this. Or I have to execute the whole program and reverse engineer the answer. Both are colossal wastes of time compared to a few "useless" or "repetitive" comments.


I had a teammate who would write something like the below and it drove me crazy. I assume this is closer to what the article meant. The below comment does not add anything to my understanding of what the program does.

//Assign 5 to the val called a

val a = 5;


Sure; that's a classic useless comment. Counterpoint would be a useful doc, where did the 5 come from?

    // Chosen by fair die roll. Completely random
    const kEschatonThreshhold = 5;

    val a = kEschatonThreshhold;


Absolutely, that is a great comment.


I'm not arguing that good comments aren't useful. Of course even the best comment can later be invalidated by someone carelessly changing the code and failing to update that comment. However, that doesn't make pointless repetitive comments somehow better or more useful.

For me comments should concentrate on the why and maybe the how but mostly ignore the what. That should be obvious from the code.


You're basically making the case that well-designed code doesn't need comments like the ones you're asking for.


Sure, if the name, type, and context of a variable can completely encode all the details that I need to know when working on the code, then fine, we can skip the doc comments.

Do you think that's possible for every field in all, or even most, applications and languages? I do not. And so I don't think we should be allergic to a few wasted sentences of prose.


Of course it's not possible. And that's why the sane requirement for comments is to use them where something might not be obvious.


Sounds like we agree.


> A data field called "imageURL" that's a string -- is that a remote or local URL? Where did it come from/how much can I trust that it's well-formed?

Put that information in your types and (i) you don't need to write those comments and (ii) the compiler can help you check if you haven't made mistakes.
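A hypothetical sketch of what that could look like in Python (the RemoteImageURL name and its rules are made up for illustration):

    from dataclasses import dataclass
    from urllib.parse import urlparse

    @dataclass(frozen=True)
    class RemoteImageURL:
        """An absolute http(s) URL pointing at an image, validated at construction."""
        value: str

        def __post_init__(self) -> None:
            parsed = urlparse(self.value)
            if parsed.scheme not in ("http", "https") or not parsed.netloc:
                raise ValueError(f"not a remote URL: {self.value!r}")

    # Any function accepting a RemoteImageURL can now assume it's remote and
    # well-formed, with no doc comment needed to say so.
    avatar = RemoteImageURL("https://example.com/a.png")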


I said "why isn't this a `URL` instead of a string". I know we should "Put that information in your types". The code is written. I can't safely make that change until I know the invariants, and if the person who wrote the code had written a doc comment, I would know them.


> The code is [already] written[ and doesn't include that information in its types].

By that logic, the code is already written and doesn't include that information in its comments, either.

> if the person who wrote the code had written a doc comment, I would know [the invariants].

If you're going to ask that the person who wrote the code have written it differently, there's (in general, at least) no reason to ask for comments in preference to types.


Would you be satisfied with a comment like that? :)

string imageURL; // Url of an image


string imageURL; // string variable containing the Url of an image


> I'd rather my colleagues repeat themselves in their documentation comments than provide none at all.

To a point. Believe me, there are levels of repetition you would find less tolerable than a lack of documentation. Here's a real world example: we had a bunch of classes with lots of attributes. The getters and setters for those attributes were automatically generated (with a macro), because we had lots and lots of them. In the headers however, we wrote the prototypes in full, so the tools could find them more easily:

  Foo getFoo();
  Bar getBar();
  Baz getBaz();

  void setFoo(const Foo &foo);
  void setBar(const Bar &bar);
  void setBaz(const Baz &baz);
So far so good. Now documentation is an important thing. So important in fact that we had to document every single method in all classes. And since we were using Doxygen, we were a bit constrained.

  Foo getFoo(); ///< Get foo
  Bar getBar(); ///< Get bar
  Baz getBaz(); ///< Get baz

  void setFoo(const Foo &foo); ///< Set foo
  void setBar(const Bar &bar); ///< Set bar
  void setBaz(const Baz &baz); ///< Set baz
OK that's redundant, not too bad, and QA is happy now. Well, no. First, we are supposed to be using Javadoc-style documentation. Second, we need to document every argument, and the return values. How are we supposed to understand a getter if we don't document its return value?

  /**
   * Get foo
   *
   * @return foo
   */
  Foo getFoo();

  /**
   * Get bar
   *
   * @return bar
   */
  Bar getBar();

  /**
   * Get baz
   *
   * @return baz
   */
  Baz getBaz();

  /**
   * Set foo
   *
   * @foo The new Foo
   */
  void setFoo(const Foo &foo);

  /**
   * Set bar
   *
   * @bar The new Bar
   */
  void setBar(const Bar &bar);

  /**
   * Set baz
   *
   * @baz The new Baz
   */
  void setBaz(const Baz &baz);

And so on, often for 20 attributes instead of just 3. I am not even kidding. Now we could say it's just a matter of space, but it isn't: sometimes, there was an attribute that was a bit special, and the comment reflected that:

  /**
   * Set wiz
   *
   * @wiz The new Wiz. Must not be Merlin.
   */
  void setWiz(const Wiz &wiz);
Except the additional comment might as well have been absent, because it was totally lost in the noise. I've missed several cogent pieces of information that way.

Worse, the tech lead, who acknowledged that this practice was useless at best, and had the clout to have this restriction relaxed, refused to do so. To this day I'm not sure why. I have a couple hypotheses (plain inertia, don't fix what's not blatantly broken, don't want to argue with QA, afraid of regulation…), but nothing solid.

If I had to choose between no comments at all and that, I'm not sure I would choose that. Sure, I would lose valuable comments, but if I already lost them because of sheer noise, I didn't lose much at all.


Fair, and I definitely get what you're saying, but I would still argue that for (Java)doc-style comments that one useful one outweighs the cost of the repetition.

And the reason is that you're not always just reading the header. When I'm looking at a `setWiz` call site, my IDE can pop up just that one docstring and save me a bunch of aggravation.


> I would still argue that for (Java)doc-style comments that one useful one outweighs the cost of the repetition.

You have to actually notice it for it to matter. A missed comment is no better than an unwritten comment.

> When I'm looking at a `setWiz` call site, my IDE can pop up just that one docstring and save me a bunch of aggravation.

Didn't know about that. My IDE never did that (I used Visual Studio at some point, and now QtCreator).


  //Insert new customer into database
  _db.Insert(newCustomer);
The number of times I've had to point out how incredibly pointless comments like this are in code reviews is far too high. As is the number of times I've had to point it out to the same developer.


Worse is

    //Insert new customer into database
    _db.Insert(oldCustomer);
As a reaction to this, I used to work somewhere that had a zero-comment policy. If you wanted to describe something you had to do it in the function and variable names. Overreaction in the other direction.


Absolutely agree. I had a similar experience. 30-character camel case names are no substitute for a well-written comment. What I've found works well is to view a comment as the topic sentence of a paragraph. For each section of code that can be reasoned about as a unit, set it off with white space above and below as paragraphs are set off in text, and give it a topic sentence explaining the intention. If there are difficult, delicate, or subtle aspects of the code, be kind to your reader and explain them.
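A small made-up example of the "topic sentence" style (Python, hypothetical order data):

    from dataclasses import dataclass

    @dataclass
    class Order:
        customer_id: str
        amount: float
        status: str

    raw_orders = [Order("a", 10.0, "paid"), Order("a", 5.0, "cancelled"), Order("b", 7.5, "paid")]

    # Drop cancelled orders up front so every later step can assume live orders only.
    orders = [o for o in raw_orders if o.status != "cancelled"]

    # Total the remaining amounts per customer; this is what the report is built from.
    totals: dict[str, float] = {}
    for o in orders:
        totals[o.customer_id] = totals.get(o.customer_id, 0.0) + o.amount

    print(totals)  # {'a': 10.0, 'b': 7.5}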


I get that, but if there is a comment above each portion of the code with a short description of what they are attempting to do, then I don't mind that some of them are trivial. You can read the (hopefully syntax-highlighted) comments as a list of steps in the function.

Also, maybe that was a large block of code that was replaced later with a function call. Or maybe the code used to validate things about the customer used to be there before the function call and now that's all been wrapped into the called function.


> Programmers who don’t code in their spare time for fun will never become as good as those that do.

I would guess that the reality is more nuanced. Programmers who don't code in their spare time will probably find their knowledge becoming out of date. But that's because our profession doesn't really have a concept equivalent to continuing education requirements (medical, legal, etc.), so we aren't really given an opportunity to sharpen our saw outside of our free time.

But that's only if you're consciously doing it to learn new things. My guess is that spare time programming for the sake of spare time programming actually has a deleterious effect. Though I admit I base this on conjecture and over-inference into scanty evidence. There are some anecdotal observations that colleagues who spend their evenings and weekends grinding away on code have a tendency to also be the ones who have trouble concentrating or make burnt out fuzzy brain mistakes at work. And there's some empirical evidence from other domains such as language learners and musicians that, past a certain point, additional time spent practicing actually does more harm than good. Brains develop fatigue, too.


I actually think in general, if somebody is so into coding they can't help doing it in their spare time as a hobby, they probably will reach a higher level than people who aren't driven in the same way. Programming is very much a skill which is sensitive to practice, so I think more hours on the keyboard is a pretty good way to get better at programming.

However I think programming "as a hobby" is a poor metric for judging who is going to be the best programmer, in terms of a hiring process, for two reasons:

1. It's already well-known that many companies will favor people who program outside of work during a hiring process, so many people will try to appear as if they are the kind of person who lives and breathes coding, and cultivating that image is much different than actually getting practice.

2. Raw programming skill is not always the best indicator of how good a contributor to a project someone will be. Sure you have to be competent at coding, but beyond a certain level, assuming the project is not particularly technically challenging like HPC or computer graphics or something, things like communication skills, being able to write good documentation, and being able to produce in a consistent and predictable way play as much a part in the team's success as being a wizard on the keyboard.

And I say that as someone who does spend a lot of my free time coding.


I used to have programming as a hobby but now it's been my job for over a decade and I don't really spend any of my free time on it. Maybe I'm lucky with the positions I pick but I still feel like I'm learning plenty at the job. If I don't I usually get bored and start looking for something else.


It was like pulling my own teeth yesterday to make myself work on the Angular app at my job. After work I couldn't wait to work on Advent of Code, so I did some problems from previous years until the new problem was available. Then, when I couldn't sleep, I installed an IDE on my phone and did some more in bed.

I can think of a lot of reasons why. The AoC problems have a bite-sized satisfaction that a messy real world app may lack, for one thing. But mostly I think it's yet another symptom of procrastination.


Absolutely. Also, to your 2nd point: in most business projects, soft skills are what you use to figure out what to build. If you're weak on that side, then being a blazingly efficient programmer means you're likely to just be really good at moving very quickly in the wrong direction.

I've found that the rockstar coders I work with often produce a very high volume of output that is technically excellent but doesn't actually contribute much to the company's profits.

I suspect that that kind of work output is currently overvalued by the prevailing business climate. FAANG companies can afford a "throw a bunch of stuff at the wall and see what sticks" approach, and venture capitalism pulls that up to a meta level. My own bias is that I've never worked under either regime, so I'm used to seeing a different mix of strengths be more valuable.


"Print statements are a valid way to debug code."

In 30+ years of programming, the vast majority of bugs I've fixed were diagnosed with a few strategically placed print statements.

"I just print values. When I’m developing a program I do tremendous amount of printing. And by the time I take out, or comment out the prints it really is pretty solid. I rarely have to go back." — Ken Thompson in the book Coders at Work by Peter Seibel


I call this the 'printf' school of debugging.

It works a lot of the time (but not always, not great for debugging locks on stdout, for example). If you only have time to learn one debugging technique, this is the one to learn.


Print statements can generally be thought of as unit tests that require 1 line of boostrapping code.
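In that spirit, the "one line of bootstrapping" might look something like this throwaway example:

    def parse_price(raw: str) -> float:
        cleaned = raw.strip().lstrip("$").replace(",", "")
        # The one-line "unit test": dump the intermediate state, eyeball it, delete it later.
        print(f"parse_price: raw={raw!r} cleaned={cleaned!r}")
        return float(cleaned)

    parse_price(" $1,234.50 ")  # prints the intermediate state instead of asserting on it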


My controversial opinions:

- Good abstractions are hardware instructions on layers of steroids. Bad abstractions are tabula rasa models that don't acknowledge the nature of the operations that the hardware is doing.

- "var id: type" syntax is both ugly and impractical.

- "inconsequential syntax details" like my last opinion are actually important aspects of a programming language and not just bike-shedding.

- If a programming language includes a package manager that points to a repository by default, then it's a not a serious programming language.

- Drivers don't belong in the kernel. The kernel should just define an interface for you/manufacturers to write drivers for.


> - If a programming language includes a package manager that points to a repository by default, then it's a not a serious programming language.

I'm not sure what you mean, having a default repository seems like a sane decision to me. Do you have a language in mind that doesn't do this?


You have to install package management tools and point to repos for most languages - Erlang, Java, PHP, Lua, etc.

Very few languages include a package manager as part of the LANGUAGE.


I’ll add a controversial opinion, specific to programming an application:

Premature Abstraction is the root of all evil.

Don’t panic! The code will be a mess. Let it be a mess. Don’t abstract too early, or the eventual mess that regenerates will be even worse than without an abstraction.

If a part of your application or assumptions in a piece of code haven’t changed for 12 months, then abstract that part of your application. However, only abstract the bare minimum.


> “Googling it” is okay!

I wrote a book called “Relevant Search” and created one of the most popular Elasticsearch plugins (Elasticsearch Learning to rank). I’ve worked in Solr, Lucene, and Elasticsearch for more than 8 years more or less constantly.

I have, at any given time, dozens of google tabs open dedicated to Solr, Lucene, Elasticsearch. Sometimes on basic things to jog my mind.

Knowing how to interpret information online, what exactly to search for, and having memory to be jogged by other context is just part of what it means to have a skill.

Memorizing documentation is rarely a good use of brain cells IMO, even pre google when instead you used a book index/ToC to do the same thing.


The sign of a junior developer: constantly injecting some new learned aspect into whatever they are focusing on the moment rendering your project a tapestry of broken shards of glass.


I think that is called a mosaic.


> UML diagrams are highly overrated.

Personally I think they're plain dumb. Not saying I can't hide my ego and move on, just that I think they're a massive waste of time on a leaky abstraction rooted in organizations that value talking about things more than doing them.


UML class diagrams are overrated but they're not the only UML diagrams.

UML sequence diagrams are pure gold.

I have no idea how you can accomplish anything remotely complex without extensively talking about it (and writing it down!) first. It works fine for 2-months projects, but not for 2-years projects.


I disagree, they're consultant gold :-)


> 1. Programmers who don’t code in their spare time for fun will never become as good as those that do.

Not surprised that this would be controversial.

> 17. Software development is just a job.

A bit disappointed that #17 is controversial in principle, though it is certainly a hobby for some and a religion/obsession for others. Maybe what I'm looking for is "Software development can be just a job and that's OK - it doesn't have to be an all-consuming passion that devours all of your waking hours, even if your employer or boss thinks it should be. People with varied interests can be better developers than people who have few interests outside of software development or computing technology. Most software applications depend on expertise outside of software development, often knowledge in a specific application domain (such as science, mathematics, engineering, game design, business, or communication) as well as expertise in computer-human interaction or sociotechnical design."

Hopefully that wouldn't be particularly controversial, but it probably would be. ;-p


I think #17 says that software development can be just a job, don't forget to also have a life, don't sacrifice everything just for the job. But at the same time, if it's "just a job" and you find yourself annoyed going to work, doing it at the lowest acceptable quality level, never being proud of some elegant solution, then you'll probably never excel at it and it will be a nuisance for you and your teammates.

So those are two different meanings of "just a job".


> most comments in code are in fact a pernicious form of code duplication.

I've found one of the most frustrating things at my current job to be devs writing complex JS w/ 0 comments. Takes so long to understand wtf they're doing. I'd rather be on the other end of the spectrum. Ofc the middle path is best (minimal comments, when needed).


> Your job is to put yourself out of work [...] any software that you create is to be written in such a way that it can be picked up by any developer and understood with a minimal amount of effort.

This is a potentially dangerous one, my opinion is (when misinterpreted) this notion will make your product as disposable as your employees.

While the examples given to achieve this are not themselves bad, in the wrong hands the notion of ensuring that minimal effort is required to understand any code produced can be used to enforce mediocrity: to make writing good code difficult by reducing everything to the lowest common denominator, out of fear of something being too difficult for everyone to understand... Within reason, it's important to embrace and exploit qualities not shared by all employees - there is a risk of certain employees becoming difficult to replace, but it's better than the guarantee that nothing of value will be produced.


Already this is essentially the ethos of large software companies like Google, et al. An employee is a cog that can be moved to any project or replaced/discarded as needed. [Exceptions are of course made for a handful of people they actually do value for one reason or another]. Software is treated the same if it's not a primary revenue source (even if it is profitable).

Heck, if you want a controversial opinion then arguably "disposable engineers" is an openly stated goal of programming languages like Go. [I'm not sure I'd entirely agree with this opinion but I wouldn't entirely disagree either].


You're not wrong... I think the irony in this is that it likely produces very large codebases in an attempt to maximize legibility - in such a way that devs new to the code will spend an equal amount of time traversing it and trying to find things as they would have trying to understand a more minimal but less boring codebase while also potentially improving their abilities. In other words it's lose lose for all.


> Unit testing won’t help you write good code.

Hard disagree. Unit tests themselves, yes, are really only great at regression testing.

Writing code that is unit-testable helps you write excellent code.


I disagree. I think often times "making things testable" leads to code which is much harder to follow. For instance, dependency injection often leads to things depending on abstract interfaces rather than concrete implementations, and it becomes much harder to trace through code.

I actually think integration tests are vastly more useful than unit tests, and in general unit tests should be employed: a) if you are testing business logic which is not obvious, b) as a method of debugging or c) when it makes it faster to iterate on a particular piece of code during development.

I think excitement about unit-testing has become over-generalized to domains where it doesn't make sense. For instance, in a mobile application unit testing your networking layer does not add value 99% of the time, will be way less helpful than hands-on testing of the UX, and just adds busy work to modifying the code-base because you have to maintain the tests as well.


> depending on abstract interfaces rather than concrete implementations

That's definitely a problem... in Java. I've had to deal with it before (in Java), and I think part of the problem is that people start using that abstraction pattern before there's actually multiple potential concrete implementations. In Ruby, I've applied something like this pattern, except all I'm doing is wrapping the service library to make my own DSL, and it's working tremendously well and allowing for excellent testing.

And yes. Anything can be overapplied / mis-applied, and sure, unit testing might be more susceptible / abused in these ways.

Essentially, I unit-test my functional code (libraries, etc.) and I integration-test my OO code (actual features, as the whole feature). It's not only a really easy way to test each kind of thing, but enforcing the functional-library / OO-feature split has led to (IMO) practically perfect code :D


> people start using that abstraction pattern before there's actually multiple potential concrete implementations

But that's what I mean: the second concrete implementation is the mock which has been created for unit testing, and this is how unit testing introduces complexity and indirection.


Hmm.

So what I'm doing is that I have a wrapper object with a private member variable that's the client from the service's API library (for this example, it's DialogueFlow). My wrapper then has a function for each basic thing that my code actually wants to do with the library, translating from the plain data objects (or straight arguments) that're in use in the rest of my codebase into the nested object mess that DF talks in (and back again, when there's return values).

Then in tests, I just monkeypatch the private `client` method/variable to return/be the mock. The mock then replicates only the functionality that I actually use from the service's library, which is easy to do since there's only one file to look at for all usage.

I now have my own DSL for interacting with the service, and I understand the service well enough to fake it, which has been super helpful. So far I'm loving it, and it doesn't feel at all like the "where's the !#$%! bean" hell of Spring.
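Roughly this shape, sketched here in Python rather than my actual Ruby (every name is a made-up stand-in; the real client obviously does more):

    class VendorClient:
        """Stand-in for the third-party SDK client (the real one makes network calls)."""
        def detect_intent(self, session: str, query_input: dict) -> dict:
            raise NotImplementedError("network call in production")

    class ConversationService:
        """Thin wrapper: plain arguments in, plain values out, vendor shapes hidden inside."""
        def __init__(self) -> None:
            self._client = VendorClient()

        def reply_to(self, session_id: str, text: str) -> str:
            response = self._client.detect_intent(
                session=session_id,
                query_input={"text": {"text": text, "language_code": "en"}},
            )
            return response["query_result"]["fulfillment_text"]

    # In tests, swap the private client for a fake that mimics only the slice
    # of the vendor API the wrapper actually uses (this is what the monkeypatch does).
    class FakeClient:
        def detect_intent(self, session: str, query_input: dict) -> dict:
            return {"query_result": {"fulfillment_text": "hi there"}}

    service = ConversationService()
    service._client = FakeClient()
    assert service.reply_to("session-1", "hello") == "hi there"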


This is a place where Go's duck typing really shines. You can create a mock class without having to modify the actual code.


A good controversial one:

Scrum turns great developers into average developers.


The corollary: it also turns "bad" developers into average developers.


That's not a corollary.


> 6. Not all programmers are created equal [...] In actual fact, the performance of one developer can be 10x or even 100x that of another.

The author points this out as controversial, possibly because it might make people feel bad. I think it’s just wrong. Measuring programmers by output means you’re ranking your team based on who’s doing the most in that context. That includes what opportunities they’ve been provided, their areas of expertise, the level of autonomy they’re given, etc. Getting people to reach their full potential is hard work. Sure, there are developers who require little to no effort to be high performers. Chances are it’s because of the opportunities they were given in the past to get there.


I'll venture to say 100xers don't exist.

If you're in a team that helps you succeed, you'll perform better.

I've been very good at leaving jobs where I don't feel like I'm a good fit. On my previous team I spent most of my time helping junior developers. This was extremely rewarding on a personal level. While I wasn't writing 500 loc a day, it was impressive enough to net me a promotion and a 60k raise after about a year.

Thinking that you're simply the greatest programmer that ever was doesn't mean you'll work well on a team.


>I'll venture to say 100xers don't exist.

Maybe this is just a semantic argument, but there are definitely 1/100xers out there. Even worse, developers so bad that they are actually net negatives. They take up more of the senior's time through bug fixing and mentoring than it would have taken the senior to just write the code themselves.


Give a 100xer a boss who berates them for not writing enough comments and they're no longer a 100xer. If anything, I've seen more people get in trouble just for being jerks; you can do very well as a bad developer, you just can't be a mean person.


Your claim is it is impossible for there to be a 10x difference in "full potential" and all apparent differences are because the "full potential" has not been unlocked?


No, his claim is that measuring two developers and saying one is 10x more productive than the other doesn't mean that the first is 10x better, just that they are 10x better at what you are measuring under the terms you are measuring, nothing else.


OK. So there's this concern of measuring of output, which I think is very fair as it is usually very biased, companies are not good at attribution.

None of that seems to imply that the opinion is "wrong" though, it seems more like an orthogonal concern.


There can definitely be a 10x difference in “full potential”. I think it’s super rare, and our ability to assess full potential is massively overshadowed by other factors.


So explain again how

> 6. Not all programmers are created equal [...] In actual fact, the performance of one developer can be 10x or even 100x that of another.

is "wrong" given that

> There can definitely be a 10x difference in “full potential”.


I like maccard’s response better than mine:

> measuring two developers and saying one is 10x more productive than the other doesn't mean that the first is 10x better, just that they are 10x better at what you are measuring under the terms you are measuring.

If you’re interpreting the author’s statement as an assessment of full potential, then you may be right. I didn’t read it that way, and honestly I don’t think that assessment can be easily made.


I see no controversy here.

Anyway here's mine: readability is subjective and readable code must allow for exceptions in styling rules.


See, I think code styling is a complete waste of time. And "readability" doesn't actually refer to how you format the code, it refers to how you design and structure your code.

We have automated tools to format our code. They work great. They end arguments and take some work off your plate. Personally, I have better things to do than fiddle around where my curly-braces are placed.


> See, I think code styling is a complete waste of time

For manual code styling, I agree.


> I fail to understand why people think that Java is absolutely the best “first” programming language to be taught in universities.

Because it’s the language used for the AP test. I hate it as well that students know it first.


But is that still the case? In my experience, most CS programs start with Python nowadays.


My college CS education began with OCaml, I kid you not.


That actually sounds pretty good.

When I went, our first classes used C++, but a couple years later they switched to beginning with Scheme. (IIRC it was PLT Scheme, which is now Racket.)


lucky you :)


> If you only know one language, no matter how well you know it, you’re not a great programmer.

Totally agree

Here are my journey with programming languages with the learnings

1. Java - OOP, JVM, benefits of statically typed language

2. Python - Simplicity. Java's verbosity kills productivity. Used properly, a dynamically-typed language boosts developer productivity.

3. JavaScript/TypeScript - Asynchronous programming with promises. It was mind-blowing.

4. GoLang - Parallel programming with goroutines. Explicit errors as values reduce the window for runtime errors.


Software development being so opinion based is really irksome if you ask me. Code reviews often boil down to battles over opinions, people get caught up in bikeshedding a lot, and "famous" developers on Twitter can see their opinions practically become standards. I've also had managers consider things like TDD as absolute gospel.

One of my favorite tools in the JS world is Prettier, mostly because it reduces the opinion surface. I want more tools and practices like that.


> Software development being so opinion based is really irksome

> One of my favorite tools in the JS world is Prettier,

I have no idea how these two statements can exist in the same comment. They're basically polar opposites to me. Prettier is one of the most obnoxiously, over-bearingly, self-indulgent opinionated pieces of code I've ever tried to work with. And they even say it themselves, with pride: opinionated.


Because the opinion boils down to "use Prettier" or "don't use Prettier". If you have a team that can agree to that, then all the arguing over formatting completely disappears. You're trading trying to find some arduous compromise across the entire team with "whatever Prettier does". In my experience people are at first disappointed they don't get exactly the formatting they want, but once everyone gets used to it and formatting discussions completely stop, it's a nice relief.


Please tell me how that isn't equivalent to "adhering to <famous developer on Twitter>'s gospel" vs. "not adhering to <famous developer on Twitter>'s gospel."


Because the point isn't "Prettier's formatting is the best", it's "let's nip the argument about formatting in the bud so we can focus on much more important things". I couldn't care less who made Prettier, many tools in this space would likely accomplish this goal.


> many tools in this space would likely accomplish this goal

Exactly, so what made your team choose Prettier versus the other tools if not the fact Prettier is the trendier choice? If some of these other tools becomes the tool du jour, what's the guarantee that people won't start arguing that you should all move over to it? You only see it as nipping the argument in the bud while a single solution holds the majority mindshare on what to do with the argument.


You seem to think I am stating Prettier removes all arguments and opinions. I said it "reduces the opinion surface". If some other formatter becomes popular, debating between it and Prettier is still way preferable over arguing over single versus double quote? Space around params? Opening brace at end of line or new line? Should if statements always have brackets? Should else statements start on a new line? Template strings or concatenation? Single or multi var? Let or const? Line up variable values on the same column? How many blank lines between functions? How long should a line be? How to format ternaries?

I could go on and on and on. And on some teams I've been on, people have. I just roll my eyes at all of this. None of it matters. At all!

Prettier takes 10 seconds to set up and our prettierrc file typically has one or two settings at most, often we don't stray from the defaults at all. It's totally painless. It'd take a very compelling tool to compete with that. If one comes along, sure, whatever. The goal is to minimize bikeshedding. Truly eliminating it is a tall order.


You were doing fine until you mentioned JS


> Unit testing won’t help you write good code.

This is factual and irrelevant. Writing good software is more than just writing "good code". Unit tests help reproducibility. They help orient contributors, too, as failing tests give clues and passing tests give comfort. Plus, if you have automation like Dependabot updating your libs, you want a basic sense of whether the fully automatic updates are sane from looking at the test results.


2. Unit testing won’t help you write good code.

This is idiotic. First of all, yes, as mentioned in the quote, tests will make sure that the code doesn't break if some rando changes it. But also, code that can be correctly unit tested is usually good, well-organized code. The simple act of writing code that can be properly unit tested is a significant step towards writing good code.


"The only reason to have Unit tests is to make sure that code that already works doesn’t break."

I used to think this -- now I believe they have another almost as important purpose as a sort of "check your work" step. Not necessary for every "unit," but useful when a given class/method has high (irreducible) complexity.


A lot of it boils down to another entry on that list

> Use your brain

I.e., here is your confidence in our code (0...100%) and here are tools to increase that confidence (unit testing, integration testing, monitoring, etc.). At that point it is straightforward to ask, "does this unit test increase my confidence?" and then go onwards. The thing is, you need to learn to accurately gauge your confidence, which can be difficult at times, especially when you are dealing with unknown unknowns. But if you iterate on it, you get to a place where problems pop up in the areas you are least comfortable with, and that is a good signal you're at least approximating it correctly.


> The only “best practice” you should be using all the time is “Use Your Brain”.

> It’s OK to write garbage code once in a while.

I feel like these two go together. I think something which is often forgotten in programming discourse is that the single most important thing which code can do is create real-world value. A lot of times we get caught up in coding the "right way" because we as programmers look at code all day, but the truth is nobody else is going to see the code once it's shipped, and it doesn't matter one bit to the user if it's beautifully designed code or a mountain of kludges as long as it performs the same.

Sure, good coding practices are important, but I actually think that an expedient solution now is often better than a perfect solution a week from now, and how well-factored a particular piece of code is should be a function of how often it will be touched rather than an absolute standard.


  > what’s your most controversial programming opinion?
- Unit tests are a poor man's compiler (or a decent type system will decimate the number of unit tests). See the sketch below.

- Python is a toy. If you're developing a product there's always a better option.
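
To sketch the first point (my own illustration, not part of the original opinion): when the type makes invalid states unrepresentable, the tests for those states simply disappear -- though not all of them do.

  // Illustrative only: with an enum, "what if status is a typo'd string?"
  // is no longer a test case -- it's a compile error.
  enum PaymentStatus { PENDING, SETTLED, REFUNDED }

  record Payment(long amountCents, PaymentStatus status) {
      Payment {
          // The type system doesn't know amounts are non-negative,
          // so this invariant still deserves a unit test.
          if (amountCents < 0) throw new IllegalArgumentException("negative amount");
      }
  }

So the type system thins out the tests; it doesn't eliminate them.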


I can easily think of companies that built multi-billion dollar products on top of Python codebases. Instagram, Pinterest, YouTube, to name just 3.

I can't think the same for Haskell, which in my controversial opinion makes Haskell more of a toy language than Python.


How would you implement and test something like C++'s unordered_set, a set with constant insert and lookup, using type-level constraints instead of unit tests? How would you represent this with types?


> How would you implement and test something like C++'s unordered_set, a set with constant insert and lookup, using type-level constraints instead of unit tests?

How do you test asymptotic behavior with unit tests?

It would seem that is the kind of thing which is analytically demonstrable from the input types and knowledge of the same properties of the underlying operations (and thus theoretically within reach of a type system, if you wanted to design a type system with that in mind), but only subject to loose approximation in anything like unit testing.


Testing asymptotic behaviour is quite simple. It can be done empirically.
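
For example, a rough empirical check (my own sketch against java.util.HashSet; as noted above it is only a loose approximation, and timing-based checks like this are notoriously flaky on shared CI hardware):

  import java.util.HashSet;
  import java.util.Set;

  class InsertScalingCheck {
      // Average nanoseconds per insert of n consecutive integers.
      static double nanosPerInsert(int n) {
          Set<Integer> set = new HashSet<>();
          long start = System.nanoTime();
          for (int i = 0; i < n; i++) set.add(i);
          return (System.nanoTime() - start) / (double) n;
      }

      public static void main(String[] args) {
          double small = nanosPerInsert(100_000);
          double large = nanosPerInsert(1_000_000);
          // If insert is amortized O(1), the per-insert cost should stay roughly
          // flat as n grows 10x. Allow a generous fudge factor for noise.
          if (large > small * 3) {
              throw new AssertionError("insert does not look O(1): " + small + " ns vs " + large + " ns");
          }
      }
  }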

Yes, you might make a formal proof of asymptotic behaviour and the property that the iterator returns a set, given a certain approximate model of the system, of course. However, it doesn’t just become simple or practical given the existence of a suitable type system alone.

Also, I am not sure how useful tests of the asymptotic behaviour of a data-structure implementation are compared with tests over the actual data and system it will be used with. Is there demand for this? That is separate from a proof that a theoretical hash set has X complexity in its operations, which is clearly interesting.

Unit testing might be for the poor man, but how much money will the man doing the formal proof have left at the end of it?


> a decent type system will decimate the number of unit tests

Sure, but reducing the requirement for unit tests by 10% still requires the vast majority of the tests that would otherwise be required.


(2012)

Many of these still stand - but that's presumably because they aren't really controversial, as indicated by their vote counts.

I think it's odd that the submitters of most of the opinions considered them controversial tbh.


> I think too many jump onto the XML bandwagon before using their brains

That one didn't age well; and it was only eight years ago! Unless you replace it with JSON, of course.


Ha. Even JSON outside of dynamic languages is actually not all that pleasant to work with. In JS though - brilliant.


Well, it said controversial, and it didn't disappoint! I was surprised to see the one about getters and setters: "they make them private and provide getters and setters for all of them." I had begun to think I was the only person who felt this way (now I think he and I are the only ones). Getters and setters (and "datatypes" in general) miss the point of object oriented programming.


Not only do getters and setters miss the point of OO programming, but (controversial opinion) they utterly kill it dead in Java.

Every. single. Java. object. now has to have them. The average Java data class is a tiny stub at the top of the file that contains the class name and the data structure definition, followed by a parasitical mound of getters, setters and builders. This unsightly growth of boilerplate ends up being regenerated by the IDE tooling every time it changes anyway. I feel like I'm working with C, except that the code I'm editing has already been through the macroprocessor.
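
A hypothetical example of the shape I mean (the names are made up):

  // A few lines of actual information...
  public class Invoice {
      private long id;
      private String customer;
      private long totalCents;

      // ...followed by the mound the IDE regenerates on every change.
      public long getId() { return id; }
      public void setId(long id) { this.id = id; }

      public String getCustomer() { return customer; }
      public void setCustomer(String customer) { this.customer = customer; }

      public long getTotalCents() { return totalCents; }
      public void setTotalCents(long totalCents) { this.totalCents = totalCents; }
      // ...plus equals, hashCode, toString and a builder in many codebases.
  }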

Any needles of "smart" logic are drowned out in the haystack of dumb accessors.

That same haystack, coupled with auto-regeneration, makes it very easy to clobber any "smart" accessors, so nobody dares to be clever. It's always logic on the outside of the class. There is a thing-doer that does something to the class, but the class has no idea how to do the thing with its own data by itself.

This is the antithesis of the very ideas that OOP promotes. Whether they are a good idea is debatable, but the fundamental conflict cannot be ignored.


Thankfully, tools like Lombok partially solve this issue - while we still have unnecessary accessors, at least they don't pollute the source code and make it hard to read and navigate.
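
For example, a minimal sketch assuming Lombok is on the classpath with annotation processing enabled -- @Data generates the getters, setters, equals, hashCode and toString at compile time instead of cluttering the file:

  import lombok.Data;

  // Same kind of data class as above, minus the hand-written accessor haystack.
  @Data
  public class Invoice {
      private long id;
      private String customer;
      private long totalCents;
  }

(Since Java 16, records cover similar ground for immutable data, though they don't give you setters.)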


This was a fun game to play:

Seven I fully agree with.

Seven I agree with but I don't think they're controversial.

Two I agree with but they're interesting in that I'm pretty sure that the ideals of them are not controversial, even if they're not as common in practice.

One I agree with the title, but not the description.

One might have been controversial back then, but it might as well be an axiom nowadays.

Two I fully disagree with.


> If you only know one language, no matter how well you know it, you’re not a great programmer.

Well, is this unpopular?

If you only know a single language, and nothing else, I would think you are just good at remembering stuff you read, like guides and examples, and not that you really know what you are doing.

Although I can understand when people don't know other paradigms.


I can't believe #18, "developers should be able to write code," and the summary saying that he got some (many?) candidates who couldn't. It's not that developers can't code, it's that recruiters paid on commission will send any warm body to an interview.


I don't see any controversy here... all of the opinions are true, just not 100% of the time.


> Unit testing won’t help you write good code

How about: Unit testing can help to write "correct code".

Reading through this list, rewording the entries in that way would make them less controversial and opinionated.



If you're not at minimum duplicating your code, you're not writing resilient code.

Every piece of logic should be written at least twice and verified against each other.
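
One concrete form of this is differential testing: keep a slow, obviously-correct reference implementation and check the optimized one against it on random inputs. A rough sketch (the names are mine, illustrative only):

  import java.util.Arrays;
  import java.util.concurrent.ThreadLocalRandom;

  class DifferentialCheck {
      // Slow but obviously correct reference.
      static boolean containsLinear(int[] sorted, int key) {
          for (int x : sorted) if (x == key) return true;
          return false;
      }

      // The "clever" implementation we actually ship.
      static boolean containsBinary(int[] sorted, int key) {
          return Arrays.binarySearch(sorted, key) >= 0;
      }

      public static void main(String[] args) {
          ThreadLocalRandom rnd = ThreadLocalRandom.current();
          for (int trial = 0; trial < 10_000; trial++) {
              int[] xs = rnd.ints(20, -50, 50).sorted().toArray();
              int key = rnd.nextInt(-60, 60);
              if (containsLinear(xs, key) != containsBinary(xs, key)) {
                  throw new AssertionError("implementations disagree for key " + key);
              }
          }
      }
  }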


What's controversial there?


Good point.


My favorite opinion is that comments muck up the source code.


wagslane (or dang, if you somehow stumble on this comment), please add (2012) to the title. Thank you.


I got you


#18 stands out to me. The question the guy is asking is a straight-up math problem. I don't know about the average person here, but who often needs to calculate the area of a circle or determine factorials in their day-to-day coding? I can understand it if you work in the actual sciences or in game physics. But for those of us who just manipulate data all day, why would you ask that question? We know how to program, not how to do math. Sure, if you gave us time to learn that kind of math we could figure it out, but asking "do this math problem, but in code" on the spot is like asking a construction laborer at an interview, "What would you estimate is the proper weight of concrete laid upon a steel beam 200ft long?"


Those questions are perfectly reasonable sanity-check tests of basic programming skills (functions, loops, basic arithmetic). That person is not assuming you know how to calculate Pi -- they're telling you how to do it as part of the question.


How can someone be expected to know how to code something if they don't understand the basic concept behind the question? The only time I've ever seen an equation like that was in Trig or Discrete math, both of which were college-level courses; nothing like it came up in the basic loops and arithmetic I learned in actual introductory classes. I cannot calculate the area of a circle without a Unit Circle. I'm sorry, but it's just incredibly unfair to assume people remember all the Trig they did like it was yesterday. Trig's easy, but who uses it on a daily basis? Machinists and architects probably use it far more than we ever will.


The question in the article doesn’t use trig. You only need a bit of an idea of how loops work and be able to recognize patterns.
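
For illustration only -- this is my guess at the sort of exercise, not necessarily the article's exact question -- the factorial version mentioned upthread is about this much code:

  class Factorial {
      // Iterative factorial: the kind of loop-and-pattern exercise used
      // as a sanity check of basic programming, not as a math test.
      static long factorial(int n) {
          long result = 1;
          for (int i = 2; i <= n; i++) result *= i;
          return result;
      }

      public static void main(String[] args) {
          System.out.println(factorial(5)); // 120
      }
  }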



