Write 5 tests for a piece of code that you think is working but has no tests. You will find at least 1 bug every time.
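To make that concrete, here is a rough sketch of what those 5 tests might look like in Python, using a made-up slugify() helper as the "working but untested" code. The edge cases (empty input, repeated separators, punctuation-only strings) are usually where the hidden bug turns up.

    import re
    import unittest

    def slugify(text):
        # The "working" code under test (invented for this example).
        text = text.strip().lower()
        text = re.sub(r"[^a-z0-9]+", "-", text)
        return text.strip("-")

    class TestSlugify(unittest.TestCase):
        def test_simple_phrase(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_leading_and_trailing_spaces(self):
            self.assertEqual(slugify("  spaced out  "), "spaced-out")

        def test_repeated_separators(self):
            self.assertEqual(slugify("a -- b"), "a-b")

        def test_empty_string(self):
            self.assertEqual(slugify(""), "")

        def test_only_punctuation(self):
            self.assertEqual(slugify("!!!"), "")

    if __name__ == "__main__":
        unittest.main()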
Staying at work longer won't help you produce more and better code. Sleep and exercise will.
People who get angry about technical choices like what framework to use or what coding style or how tightly to enforce rules will flame out. Don't be that person.
If you think you need to rewrite it from scratch, think again. Look up Chesterton's Fence.
"Chesterton's fence is the principle that reforms should not be made until the reasoning behind the existing state of affairs is understood."
In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, "I don't see the use of this; let us clear it away." To which the more intelligent type of reformer will do well to answer: "If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it."[1]
Chesterton's Fence is a great example of why code comments are critical.
If the code is doing something that isn't immediately obvious to a programmer on a deadline, there should be comments explaining why it's there.
"Go away and think" just means the programmer who wrote it couldn't be bothered to take a couple of minutes to be clear, resulting in later programmers wasting hours or days (or worse) re-discovering the reason for something in the first place.
A counterpoint to this is that the very reason to refactor code is usually that it isn't understandable anymore. I'd rather read it as: don't rewrite code when you don't understand the problem it's trying to solve.
In my experience, that part is usually much easier, since you're working in hindsight.
There's a great interview Peter Seibel did with Bernie Cosell[1], where Cosell explains that his approach to debugging usually consisted of
a) understanding what a specific piece of code is supposed to do and
b) rewriting it so it actually would do it properly.
Of course there's much to be said on when this approach is overkill and when it isn't.
> "Go away and think" just means the programmer who wrote it couldn't be bothered to take a couple of minutes to be clear
I haven't had the luxury of working with a lot of experienced veterans, but I've seen and been burnt by rookies who couldn't be bothered to read code other people wrote, or code written without their favorite framework/language. No amount of comments would deflect those types.
Chesterton's Fence is a brilliant insight. Funnily enough, I learned it reading some interview in a fashion magazine. They talked about how what set different designers' styles apart was that they didn't break fashion 'rules' haphazardly; people should only break 'rules' whose purpose they understand, knowing why they want to break them.
I think that 5. and 10. are not as good advice as they sound. Not many people are good teachers, and if you try to learn with others, they will oftentimes attempt to do all the work, especially the thinking part, for you. If they already know more, there is no way for you to match their speed, and you end up just doing what they dictate - the worst possible way to learn. If you want to become good, you need to spend time learning alone, when you can try out a wrong path for a few minutes, solve problems independently even if it is slower, make mistakes, and find things out by yourself.
It is different if people around you have good social skills on top of tech skills, but many people are not like that. Contact with a community is important for motivation, and having the possibility to work with good people is indispensable. However, much of the important learning happens when they are not in the room and cannot take control of the keyboard and tell you the solutions to problems before you've had the chance to think.
"9. Trying to understand everything is a lost cause. In the beginning, I tried to chase down the “why” to every problem I encountered. This isn’t necessary."
Nonsense. That is literally the worst advice you could give to an aspiring programmer. It is precisely necessary to get to the bottom of your programming problems and bugs, that's the stuff that increases your knowledge and lets you understand systems deeply. Be curious! Track things down all the way to the OS and the hardware! Recently, I just found an OS bug that negatively affected the realtime performance of our controllers. Dig deep! Never write code based on ignorance!
Since I see no mention of wages (maybe OT), I will chime in:
Value your work, even if you think you are "still learning". I spent way too much time on under-market salaries writing good code. I wish someone had told me I could be earning 5x whilst still at uni.
Correct. I thought that somehow getting that degree suddenly qualified someone as a "real engineer". You're better once you've finished, but your capability is still on a continuum that started well before your first class.
Don't drop critical thinking and "hammock time" in favor of Agile. Some problems require a bit of personal reflection; without it you can easily end up building something atrocious, even while following all the "best practices". I spent a couple of years at a SCRUM shop and all we did was "finish stories" without any regard for architecture. The technical debt was truly stunning. These days I easily spend a day just thinking about the problem before writing any code, which then still happens in a REPL-driven style to allow for experimentation and feedback. Best of both worlds, I think. Also, most programming is actually fairly easy. Focus on the (business) problem you want to tackle instead of the code. Code is ephemeral; the only things that matter are the business value and the user experience for getting there.
- side effects are bad
- code should follow data
- it won't do what you intend until you've verified it
- solve the problems you have, not the ones you anticipate you will have
Side effects from functions that modify global state (or import global state elsewhere) are bad because they increase the complexity of a program in a way that can very rapidly spiral out of control. The behaviour of a function reading global state is no longer easily predictable, and you'll end up tracking down hard-to-replicate bugs.
Better to have a function rely entirely on its arguments and to output the result as a return value.
Side effects that do things such as persisting data or producing output are - for the most part - acceptable, if they are contained in such a way that you move to as much of a 'functional' approach as soon as you can.
So reserve your side effects for the edges of your program and keep the rest as side-effect-free as you can.
Now that was a little long for a bullet point list hence the abbreviated version.
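If it helps to see the "pure core, side effects at the edge" idea in code, here's a minimal Python sketch (my example, not the parent's):

    # Hard to reason about: depends on and mutates global state.
    total = 0

    def add_to_total(x):
        global total
        total += x
        return total

    # Easier: the function relies only on its arguments and returns a value.
    def add(running_total, x):
        return running_total + x

    def main():
        # Side effects (input, output, persistence) live here, at the edge.
        values = [1, 2, 3]
        running_total = 0
        for v in values:
            running_total = add(running_total, v)
        print(running_total)

    if __name__ == "__main__":
        main()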
Although I had done some programming before, I always remember the first lecture of the first year of the CS course I did where the lecturer basically concentrated on the importance of KISS (i.e. Keep It Simple Stupid). He did say that we'd probably not believe him, and of course we didn't as we were all desperate to prove how awfully clever we were. He did also say that if we stuck at things long enough we'd probably come to share his view.
How I wish I'd managed to believe that bit of advice a bit earlier in my career and saved a few projects from my zealous architectural astronautics. Well, at least that's one lesson I did eventually learn.
Can't agree with this one enough. I'm currently battling through a project full of people that are too smart for their own good and too stupid to realise it.
I had a huge argument today with someone that wanted to wrap a certain type of higher order function ("foos") in another higher order function with a particular name ("fooGenerator()") for the sole purpose of differentiating it from a different kind of higher order function in the same file ("bars").
I lost the argument. It turns out using heading comments to separate groups of similar functions, or god forbid splitting up the 400 line file into multiple smaller ones, is too confusing and prone to errors by maintainers ("What if they put a new foo function in the section for bar functions!")
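For anyone who can't picture the disagreement, here's a hypothetical Python reconstruction (the names and behaviour are invented; the real file was 400 lines of these):

    # A "foo": a higher-order function that returns a formatter.
    def make_greeting(prefix):
        def greet(name):
            return f"{prefix}, {name}!"
        return greet

    # The proposal I argued against: wrap every foo in a generator whose only
    # job is to signal "this is a foo, not a bar".
    def fooGenerator(prefix):
        return make_greeting(prefix)

    # The alternative: keep the functions as they are and let naming, a heading
    # comment, or a separate module distinguish foos from bars.
    greet = make_greeting("Hello")
    print(greet("Ada"))  # -> Hello, Ada!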
I am learning that sometimes people try to defend their ideas by coming up with ways to make their idea work. If accommodating the solution is more work than actually implementing it, it might not be the right solution.
That unless you are in a pure code-monkey position, programming is at most 30% of the job, and there is an array of skills that have nothing to do with programming that, when mastered, will give the impression that you are an expert programmer.
I have some idea of what you're hinting at, but not entirely. I'm in my 3rd year and have seen a lot of bullshitting. Should I just learn to start bullshitting?
Is everything other than programming "bullshitting"? Far from it. I am referring to a multifaceted spectrum of skills, no different from the one every other non-technical professional has to contend with. Linguistic skills, logical fallacies (recognising other people's bullshit), arguing pro and contra, team dynamics, business knowledge (if you're in line-of-business software, which most of us will be), testing (a science in itself)... and yes, the softer skills such as marketing yourself and trying to present your work in the best possible light.
I would also recommend learning how politics functions in the workplace, if only so you can defend yourself from other people's black magic.
Another way to read this would be that you shouldn't just be the one to churn out lines of code according to a specific request.
More often than not, you'll be well advised to first get to understand the problem you're supposed to be solving and come up with a good solution rather than what you're told at first. The XY problem[1] is not limited to technical people, it will very likely also affect your boss or your customers.
Occasionally, there's even the counterintuitive case where you can handle more use cases by deleting code, which I personally find very rewarding :)
That's certainly part of it, though as a freelancer this may also refer to properly architecting before you write any code, managing your time, and negotiating with your stakeholders to keep the project sane/profitable.
Exactly. Once you understand that coding is just a means to an end, you start looking at things more holistically. For a programmer, his code is the most important cog in the machine; but that is certainly not the case: marketing, user experience, etc. are just as important.
Yeah, IMO it's mostly about convincing non-technical people that, while they know what they want, you know what they need and are able to provide it. If you're not doing this, your boss is.
On #3: Agreed. In my experience, every time someone comes up with some grand design before writing any code, it's built on one or more assumptions about the problem. Then work begins, and one or more of the assumptions inevitably turn out to be incorrect and the whole design gets deflated.
Besides all the good technical advice already mentioned here I'd like to add some things I find very important, as I'm watching over the career progress of 2 junior developers.
Take VERY good care of your mental and physical health.
Do not overwork yourself. It's a marathon and you will overcome most challenges with time and discipline. You don't have to be a genius/prodigy.
Don't put too much pressure on yourself - and it's hard not to, given all the praise of highly successful programmers - because nobody is an overnight success.
How about:
- It's not about code, it's about solving problems.
- Learn to cope with change. Programming constantly changes; make sure to deal with it and don't get yourself stuck on the one thing you can do. Pick the cherries out of every programming language and framework out there. You're a C# guy? Do some Ruby. You're a Ruby guy? Do some Node.
- Don't give a shit about mainstream. Do what works for you. But keep an open mind and consider every option, and always try to investigate why stuff works a certain way.
- Don't freak out if you're lost in new things. It's normal: fork repos, join groups and IRC channels, get familiar with things.
- There is no end-game. If you think that you're going to be finished in a couple of years, you're wrong; there is always a shit ton of new stuff to learn.
- Don't be a perfectionist; it's gonna stall you and make you never deliver.
- Always deploy. Refactoring shit is easier when you know what problems your code introduces, so don't optimize prematurely.
That there is a huge amount of marketing hype that comes out of Silicon Valley for languages, projects, etc., and for the most part you are wasting your time if you get too deeply immersed in any one thing.
"Let go of your emotions" is very superficial and not really doable with the advice that is given.
We build emotions based on the truths we perceive. The only way to healthily change emotions is to change those truths. If you feel very attached to your work and flip out (fear, anger) when someone else finds an error in it, you need to change the belief that something bad will happen to you when you make mistakes. Nobody will hit you over the head * , nobody will laugh at you * , you will not lose your job or career * . You will not be worth less as a person. You made a mistake because your brain is built in a way that allows for errors, and so is every other human brain on this planet.
* Ok somebody might do these things to you. But you will cope, and other people will help you cope.
The true language of computing is mathematics, and its power and beauty are masked to those who cannot speak it. Programming languages are powerful tools for communicating ideas and encoding the language of mathematics. You can use these tools to shovel bits around without understanding the language, but you won't be able to see or say much unless you become fluent. Creative expression comes with fluency, and fluency requires the perspective unlocked through deep understanding. Becoming fluent in the language of math is the rite of passage one must cross to receive its true potential.
Logic more so than mathematics. Mathematics as a means to achieve efficiency, which is an afterthought most of the time. Most working programmers don't need more than a basic understanding of algebra.
Logic is a form of mathematics, and understanding logic is essential. Indeed, for many programmers, that's the extent of the math they know and use. And that's what limits them. There is so much more.
Unless you're solving mathematical problems, how does math directly translate to programming (ignoring efficiency)? I agree with the sentiment that the connection is over-stated. Whenever I hear this argument, it feels like reductive FP propaganda. I'm interested in FP, but not because "it's just math". Sometimes it's nice to piggyback off of mathematics, and it's true that a program is a sort of proof, but sometimes (most of the time) it's better to not be so formal, and not be so academic.
Also, math and logic are obviously related, but logic is not "a form of mathematics". If anything, mathematics is a form of logic.
Any recommendations of how to get at this as a self-taught programmer? I've picked up books on combinatorics and discrete math, but I still feel very far removed and don't know how to really take an interest and make the connection.
Also I've been posting a lot on math lately so if you want more examples and references, scroll through my HN posts and comments (I tend to cite everything so there's a ton of links in there).
Of special note, as others have started to discover, Grant Sanderson's 3Blue1Brown YouTube channel is a masterwork, an evolutionary tour de force in the way mathematical concepts are visualized and presented.
Grant wrote a custom software library (in Python, on Github) that enables him to create video visualizations to show the geometrical intuition behind abstract mathematical concepts. And this is key. Being able to visualize the geometric structure and transformations the mathematics is describing is the key that's been missing from the way math is traditionally taught, and this lack of visual intuition is why many find math so hard to penetrate.
Go there first to develop your visual intuition -- watch all his videos, and then watch them again, and again -- and soon you'll discover your new, powerful mental models forming, and things that were once opaque will suddenly become clear.
Start with his "Essense of Linear Algebra" series (it's one people are usually blown away by, and it's being discussed on HN a bunch lately).
He's going to release an "Elements of Calculus" series in April, and while Calculus is usually a prerequisite to Linear Algebra, it's not critical for understanding the visualizations the way it's presented here. But if you need a refresher, MIT's "Big Picture Calculus" is great (it's linked to in the HN thread I referenced in the first paragraph).
I can't thank you enough, this is really great. I've attempted a lot of videos etc. I really enjoy Shai Simonson's stuff, for example. And it helps me think more about things like time &/ complexity, but I've really struggled to find a "path" so to speak. I start losing them and without an application lose interest. So thank you again, I greatly appreciate it.
I remember Shai from back in the ArsDigita days. ArsDigita and OpenACS were among the first open-source projects I was involved with. Shai was one of the instructors at ArsDigita University -- one of the first CS courses on the Internet. ADuni began as an experiment by Phil Greenspun to see if it was possible to teach non-CS college grads the MIT Course 6 curriculum, compressed into a one-year, accelerated postbaccalaureate CS program.
After the year ended, they released all of the video lectures online. This was back in the late 90s / early 2000s. Back then there was no YouTube, video compression sucked, and broadband was in its infancy, so connections were painfully slow. To get the videos, you could try and download the massive video archive, but it took forever and I don't think I ever got the full thing. Finally at some point they started selling and shipping HDDs pre-loaded with the entire video archive.
Today we're fortunate to have a lot of high-quality course content available online. Greenspun was one of the pioneers who helped make that happen. We should all be thankful for his foresight and initiative to trailblaze that space.
Prior to ADuni, Phil put all the curriculum online for MIT 6.916 "Software Engineering for Web Applications". It was one of the first serious Web Engineering courses offered anywhere. Several other schools used the material to teach the course in their CS dept.
Problem Set 4 was my favorite -- design a Knowledge Management (KM) system using metadata, and then let the computer generate the server-side code for the application automatically.
The material was so new and innovative, it was the thing back in the day -- part of the foundational material programmers used as we were learning to build server-side apps in the dotcom era. The ArsDigita Community System (ACS) was one of the first big Web development toolkits, and it was built on top of AOLserver, a high-quality C server with an embedded Tcl interpreter running inside. Greenspun is an MIT Lisp hacker, and Tcl is an embeddable language similar to Lisp. Running Tcl directly inside AOLserver was a key innovation: it made server-side scripts fast by bypassing all the traditional CGI overhead, and it let you make database calls directly from inside the server. All the architecture design choices were documented and explained in detail in "Philip and Alex's Guide to Web Publishing," which was required reading at the time.
The semester I did the self-study program, Phil had the MIT students take the final exam online via an app built on the ACS. The exam wasn't open to self-study students, but I wanted to take it for my own edification. I had become intimately familiar with the ACS code, and I found a way to hack into the online system and took the exam under some guest account. I never found out how I did though -- I asked Phil about it one time when he came to Dallas, but he didn't remember. I can't believe that was almost 20 years ago.
Those ADuni lectures are actually what I know him from. They're excellent, and I very much admire his teaching. The problem I think is either that I went too far in the lectures without attempting the problem sets, or that I simply didn't have the educational prereqs or a work / hobby application to ground me and keep me working at it. Nonetheless, I very much admire the work and enjoy watching the lectures. If nothing else it gets me in a better headspace when I program. Hopefully I can transition that into learning the actual methods and techniques over time.
I will definitely check out the MIT course. I bet that would help me retain it much better. I was also looking at the Math for Electrical Engineering & Computer Science that I think you mentioned in an earlier comment.
The Fall 2010 version of MIT 6.042J Mathematics for Computer Science is a good one (as I recall there are several versions online from different years), and someone from Google is teaching it this year (haven't heard if the videos will be made available online):
I spent my first 5 years after university as an employee because "everybody has a job" and it never occurred to me that I might freelance or start my own business.
Doubly pitiful because I didn't need that much income back then, and later it became much more complicated to stop being an employee due to the higher cost of living in the place I moved to and the need to support a family.
No, but more than one can work wonders for productivity if you are looking up documentation while coding, so you don't have to constantly alt-tab and forget where you were.
I think this one is really a horses-for-courses situation. For me, my current setup of two 2560x1440 monitors plus a 15" laptop is absolutely optimal - I can spread a load of stuff out everywhere, flip between a few spaces, and just generally be super-productive. I can still be productive working from just the laptop in a coffee shop, but I feel very constrained and find myself shuffling windows around to get stuff done (even just writing an email whilst referring to something else), and I generally don't like it and pine for my multiple monitors, especially if I'm doing any sort of development work.
Other people seem to be able to be just as productive from a single monitor or laptop, and tabbing between things is very much just part of their workflow - I remember at one gig we bought a couple of displays for one dev and he never even bothered plugging them in.
(I don't think anyone really "needs" five monitors, though.)
To chime in anecdotally as a one-screen guy, for me it’s all about being able to unplug my laptop, switch locations, and not feel like my workflow’s crippled. I don’t like feeling anchored to my desk, or even the office as a whole.
My productivity probably doubled when I started using the 4 window layout of sublime. I didn't realize how much mental overhead I was spending just flipping between open files.
I think alt-tab is just a poor mechanism. With fast-switching virtual desktops, I find that the advantages of multiple monitors are quite small, though I still prefer to have two.
For anything you want to learn in your life, the best thing you can learn is this:
Learn from the masters; do not seek advice from random people on the Internet.
It is called "modeling". Pick your masters-mentors carefully.
For example, we have people in my company who could do the work of tens or hundreds of people just by themselves because:
- they are incredibly experienced and smart.
- they know regular expressions and know how to automate everything.
- they understand good design, and understand people as well as computers.
Just living around these people, you will start modeling them and doing the same things they do.
But people also model bad behaviors just by proximity. Fat people tend to have friends who are fat, and family members who are fat. The same happens with smokers or drug addicts. You pick things up from your surroundings subconsciously.
For example, someone here recommends that you not use vim and use an IDE instead. Well, the masters I know either use vim or emacs a lot. They can use other tools like IDEs too, but instead of depending on someone else to automate the stuff they need, they tend to make their own tools.
Remember that no big company like Microsoft, Apple or Google is going to make it easier for you to make your software multi-platform, for example, even though for your company it could be essential. On the contrary, if you only use their tools, your life will be miserable whenever you go against the interests of those companies.
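To make the "know regular expressions, automate everything, build your own tools" habit concrete, here's the sort of small throwaway tool it tends to produce -- a hypothetical example of mine, not something from the company in question:

    import re
    import sys
    from pathlib import Path

    PATTERN = re.compile(r"\b(TODO|FIXME)\b[:\s]*(.*)")

    def find_todos(root):
        # Walk a source tree and yield every TODO/FIXME with file and line number.
        for path in Path(root).rglob("*.py"):
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                match = PATTERN.search(line)
                if match:
                    yield f"{path}:{lineno}: {match.group(1)} {match.group(2).strip()}"

    if __name__ == "__main__":
        for hit in find_todos(sys.argv[1] if len(sys.argv) > 1 else "."):
            print(hit)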
Seek out well-written projects and contribute to them; you'll learn the principles of good software far faster than by working alone or on badly written code.
Focus on maintainability above speed, terseness, or perfection. Worse is better (<-- it took me many years to truly understand this expression)
Your plans/thoughts on any matter are strictly inferior to your thinking after putting it into practice, so do not plan too much. Instead, experiment.
I think there is value in working with poorly written code.
You will learn all the details of the language, because you will see many edge cases. You will learn about all the bad ideas (aka anti-patterns) so that you can avoid them. You will learn that comments are lies and that there are things that you can't trust. And hopefully, you will learn how to turn bad code into good code without introducing more bugs than you have fixed.
If you end up coding for a living, you are more likely to see really bad code than good code. Not only because of budget cuts and the occasional mess-up by incompetent coders, but also because good code simply works, and there is no need to touch it.
I totally agree with the "experiment" part though. Everything you learn is worthless unless you actually code something, preferably something useful.
On a more technical point, I think that terseness is undervalued. The first rule of good code, for me, is "write short code". There are a few exceptions, but these are just that: exceptions.
I was writing from my own regret. I feel that I learnt surprisingly little by spending a decade rewriting badly written code. I should have found out what encapsulation was years earlier. Sure, there are lessons there, but I think you will pick most of them up anywhere; you're always exposed to code you don't like.
I still value terseness highly, as in my opinion it's strongly correlated with readability (though not at all when taken to extremes), but I very often have to increase verbosity to increase maintainability. E.g. don't take shortcuts that will leak through abstractions.
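A tiny illustration of that trade-off (my own example, not the commenter's): the terse version hard-codes what "active" means at the point of use, while the slightly more verbose one gives the rule a name so it doesn't leak into every future caller.

    # Terse: the magic "A" encodes "active" right where it's used.
    def active_users_terse(users):
        return [u for u in users if u["status"] == "A"]

    # A little more verbose, but the meaning of "active" lives in one place.
    ACTIVE_STATUS = "A"

    def is_active(user):
        return user["status"] == ACTIVE_STATUS

    def active_users(users):
        return [u for u in users if is_active(u)]

    print(active_users([{"name": "Ada", "status": "A"}, {"name": "Bob", "status": "X"}]))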
If you can afford the time cost, type out the code you were going to copy/paste from somewhere else. You'll get a better feel for what's happening as you type, allowing you to bring it in line with the style conventions of the rest of your codebase. The few typos you will likely make will force you to understand how exactly the code achieves its goal.
2. The design and specification stages of programming are not pointless enterprise makework, they are genuinely useful.
When writing a program, make sure you understand what you're trying to achieve in detail, and write a plan for how you intend to do that.
If you don't know what you're trying to achieve (and by extension your customer/client doesn't either), schedule in some research / exploration work as the first part of your plan.
Once you have a better idea, redo your plan (and maybe your budget) and check it with your customer/client.
3. If something is going wrong, or you think something is going to go wrong, it's much better to tell someone as early as possible. Even if it's your fault, don't sit on your shame - get some help.
For example, if you're going to go over budget, you may be able to get some more budget. Or you may be able to get some features cut.
The vendor's way of doing things is just one of many possible ways, the vendor is often not right, the vendor often has their own agenda, the sky will not open and smite me if I don't go the vendor's way.
You will write tests, because without them, you cannot refactor.
With them, you will be able to quickly and easily make your code better and better, with much less risk of regression.
As a result, the initial implementation details of your code are not very important, and you don't have to spend much time thinking about how the code might evolve in the future (which is unknowable anyway).
PS Don't spend much time testing the internals of your code. Check that the inputs and outputs are correct, as thoroughly as you can. Testing internals actually makes refactoring harder.
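A minimal sketch of what that looks like in practice (the function is invented): test the inputs and outputs of the public behaviour and leave the private helpers alone, so a refactor that reshuffles the internals doesn't break the suite.

    import unittest
    from collections import Counter

    def _tokenize(text):
        # Internal detail -- deliberately not tested directly.
        return text.lower().split()

    def word_counts(text):
        # Public behaviour -- this is what the tests pin down.
        return dict(Counter(_tokenize(text)))

    class TestWordCounts(unittest.TestCase):
        def test_counts_are_case_insensitive(self):
            self.assertEqual(word_counts("Dog dog cat"), {"dog": 2, "cat": 1})

        def test_empty_input(self):
            self.assertEqual(word_counts(""), {})

    if __name__ == "__main__":
        unittest.main()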
Smalltalk, plus the book about patterns. That would have been a game changer. It did happen for me, but years later, and it largely improved my coding quality on all other tech stacks.
That really depends on the language, I think. Java/C# etc. I wouldn't dare to touch without an IDE. Most of my Clojure/Lisp work is done in Emacs (with Evil for Vim bindings, without any syntax highlighting). People /still/ think I'm goofy for preferring that. But I find that if you think of programming as "creative writing" instead of engineering, you enter a different, more flow-like mindset. The IDE and all its billions of buttons just get in the way for me, while just writing s-expressions as trees (using paredit with aggressive-indent) never breaks my train of thought.
This advice is of such low caliber and so out of touch it's almost as if you didn't even bother to open the submission before posting. It's practically useless to anyone, especially when you don't even attempt to give some motivation for it.
It's over-generalized, likely because you know only one thing or you've developed into doing only one thing and for that one thing vim is the worse choice.
Lurk in GitHub repos looking for solutions to things you've done yourself some time ago. They can clearly show what you've been doing correctly and what you could improve.
Then, if you see some place for improvement in some repo, make a pull request. Great way to learn, period.
I wish I had learned many programming paradigms; I only started experimenting with functional programming a few years ago. Imperative/OOP dominated every college course and book I read.
Personally, I've found that just getting a book on the technology as a resource accomplishes two things --
1) You can read the first chapter to get a fair understanding of how the technology works (recently, getting books on both Spring Security and Hibernate Search helped immensely); and
2) It gives you a way to sanity-check the Stack Exchange solution you'll inevitably refer to.
The moment I discovered how to use for ($i = 0; $i < 100; $i++) was quite a moment. How to use loops properly was something I wish I'd known earlier.