Things to do before and after you write code (somehowmanage.com)
117 points by Ozzie_osman on Sept 19, 2021 | hide | past | favorite | 53 comments



Missing from "after" is something that contributed to my career success: when you encounter a problem, and you will, many of them, and you (for small bugs etc) or your team (for larger issues) solve it - stop and replay it from the beginning in your head. Go through every step you remember that you or your colleagues took. Pay attention to what you should have done instead, what you should have tried, now that you know where the problem was. What were the obvious things to check? What was a waste of time? What kind of binary search should you have done? This alone can boost your performance above average quickly. And the weird part is that almost nobody does it; just do it and you'll see.


Great advice! It is similar to something I have been doing for close to 20 years now: whenever I come across a really tricky bug of some kind (usually one that got all the way into production without being caught), I write down some notes on it. What the problem was, what we did to find it, and what we did to fix it. Most important is the "Lessons" part (a few sentences) - what should be done to avoid these kinds of problems in the future.

I've written about what this looks like:

https://henrikwarne.com/2016/04/28/learning-from-your-bugs/

and I have also written about general lessons extracted from all these notes:

https://henrikwarne.com/2016/06/16/18-lessons-from-13-years-...


The per-bug write-ups seem really useful. Have you used them for internal training or just for your own reference?


I've done the same thing, but for more than just bugs. Pretty much any time I learn something while working that I think might be easy to overlook in the future, I write it down.

Some of my findings end up in internal training if it's something the team needs to know. If it's something for me, I'll just keep it in my own notes for the next time I encounter it until it's burned into my memory.


It is mostly for my own reference. However, almost every time I come across a bug like this, I will be discussing it with people in the team. It's often hard to keep quiet about it: "You won't believe what just happened..." kind of a thing.


Good retrospectives are the key. Using cause-effect diagrams and techniques like “story of a story” help facilitate good discussion. 5 Whys as well.

I try to get teams I work on to retro on all defects that escape to production.


Agree with this - I often write little Gists, mostly for myself, but every once in a while others find them useful, and some have even ended up being published publicly, e.g. this one about mistakes to do with RDS:

https://medium.com/expedia-group-tech/database-pointers-73e4...


I found that even the concept of doing the "binary search" when debugging is an acquired skill. I often see people facing a vague (in a sense that it could be anywhere e.g. among multiple components or services) issue either think by analogy to previously seen issues without verifying; or offer various hypotheses that are random and unlikely/unwarranted (from "what if we just restart it" to "maybe RAM is defective and is corrupting the data"); or jump on a random correlate that may be spurious or actually a side-effect of the issue itself, without verifying it; or just get stumped. Just forcing yourself to think "how can I narrow it down [whether it's to one of many distributed services, or to 20 lines of code out of 200]?" / "what is the next piece of information I need?" greatly improves your debugging/live site skills.
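To make that "narrow it down" discipline concrete, here is a minimal sketch (the `first_bad_step` helper and the step/predicate setup are hypothetical, just for illustration) of bisecting an ordered pipeline of processing steps to find the first one that corrupts the output, in O(log n) checks instead of inspecting every step:

```python
def first_bad_step(steps, is_output_ok):
    """Binary-search an ordered list of steps for the first bad one.

    steps: list of step identifiers, in execution order.
    is_output_ok(step) -> True if the pipeline output is still
    correct after running up to and including that step.
    Assumes a single transition from good to bad.
    """
    lo, hi = 0, len(steps) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_output_ok(steps[mid]):
            lo = mid + 1   # problem is introduced later
        else:
            hi = mid       # problem is at mid or earlier
    return steps[lo]

# Example: pretend step 7 is the first one that corrupts the data.
steps = list(range(10))
print(first_bad_step(steps, lambda i: i < 7))  # → 7
```

The same halving idea applies whether the "steps" are services in a request path, commits in history (where `git bisect` automates it), or chunks of a 200-line function.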


Excellent advice. That trains your mind to avoid getting stuck and get un-stuck in the future.

When I get stuck on a problem, I'll imagine how I will soon be thinking back on what happened, after I solved the problem, hitting my fist on my forehead and exclaiming "why didn't I think of that?" It jogs me out of the rut and makes me remember other times I had to think "out of the box" to get around a problem. Often it's something I wouldn't normally think of when approaching the problem directly from the front, like something in the environment, or a broken tool, or an incorrect assumption, or a misunderstanding. So I imagine myself past the problem, looking back in the rear-view mirror at how I solved it with the 20-20 vision of hindsight. Simply reminding myself of other times I've gotten out of a rut in the past dissolves my present frustration and helps me get out of the current rut, even if the causes and effects are totally different.


It seems to me that the “ship it” and “move fast and break things” eng culture of the previous decade or so is slowly giving way to a new culture of “be careful, thoughtful and get it right first time”. Code reviews, once relatively relaxed and informal, now take weeks.

I don’t mind the premise of “think things through before coding”, but I’m a pragmatist and I learn most from doing, and this current mentality is starting to feel stifling.

Whatever happened to “less talk, more action”?


At least from my view this is a shift towards actual engineering culture. You'd never see a civil engineer or most any other engineer operate under the "move fast and break things" philosophy.

Making prototypes and experimenting is one thing but something I consistently see with the "move fast and break things" philosophy is that production is your prototype. Sure things may go through testing first but at the end of the day features get pushed out to prod once they have an MVP and are iterated on and beta tested more or less live.

I'd love to see SWEs be more willing to sit down, make a prototype, and toss the entire thing away taking the lessons learned to build the real thing rather than trying to build the actual product off the bat or trying to send the prototype straight to production.

All this extra review and rigour is a good thing for anything going to production. The real problem is that we don't build our prototypes outside of production any more. Build your prototype on your own fast and loose but anything that actually gets near the master branch should be inspected and picked apart to the highest of standards.


> You'd never see a civil engineer or most any other engineer operate under the "move fast and break things" philosophy.

Because people will die. Software engineers who, say, work in the airline industry don't do this either (they also probably use ada or something like it).

I'd also be willing to bet a significant amount of money that an engineer that works on something like soft drink bottles moves a lot faster than one who designs bridges.


Exactly: there's no one true way, there's just the acceptable tradeoffs.

Which is the proposal I tend to write the most when businesses send a requirement my way: "X is important. Ensure X." - whatever X is, you can usually then pretty quickly outline how to do X perfectly, and how much X will cost, and suddenly there's an acceptable degree of X (which is usually outlandishly far below the implied standard of the original request) because as it turns out, they simply didn't think X had any real cost associated with it.


We were in a gold rush. No point building a boom town to last. With the gold rush coming to an end people want quality again. Actually it makes me wonder if startups will stop being a thing too.


Yeah I hate this. Speed of delivery is way more important than having a developer spend 80% of their time doing QA.

I don't want to pay developers to find a tester, watch them use the new feature using full story/posthog, etc etc. That 1) delays the feature because it still needs to get formal QA treatment and customer feedback. And 2) halves the volume of work an engineer can actually deliver.


I think that when you split a task among multiple people, important context gets lost, and you run the risk of making a bad product. Everyone can do well individually, but taken as a whole, the work product falls short. Having a detailed understanding of the problem and solution space is crucial to making something good, and getting that understanding is expensive. Individual output in a very small area isn't a good metric for productivity, the results of that output are what's important.

For that reason, I think it's important for individuals to own things from idea to implementation to maintenance. You know how best to test your software. You know why you're getting paged for it breaking. You should handle that.

I think the loss of quality when splitting up projects among multiple teams (or even individuals) is why startups produce so much more than huge corporations. (My personal view is that output scales logarithmically with the number of people involved -- to do a project that's 4x more complicated than what one person could do alone, you'll need 2^4 = 16 people. Part of that is all the communication required to hand ideas over from product to engineering, hand software from engineering over to QA, hand maintenance over to SREs, etc. You can skip all that and do more with less.)
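For what it's worth, the arithmetic behind that heuristic can be sketched as follows (illustrative only; `people_needed` and `output` are made-up names for the commenter's personal model, not an established law):

```python
import math

def people_needed(complexity_multiple):
    # If output grows with log2 of headcount, then a project N times
    # as complex as a solo one needs 2**N people.
    return 2 ** complexity_multiple

def output(people):
    # Inverse view: what multiple of solo output a team can produce.
    return math.log2(people)

print(people_needed(4))  # → 16
print(output(16))        # → 4.0
```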


A couple years back the team I was on was behind schedule on a project. We needed to complete the project in 2 months and we expected we had 6 months of work to do. So we worked nights and weekends, we had 12 hour calls in which the managers were constantly checking in on. Crunch time.

During that time we all worked as one. Focus on one workflow, get it done, move to the next one. We finished on time, and the knowledge sharing that occurred during those 2 months was more than years' worth of the knowledge sharing I had previously.

Once we completed the project, we went back to the old style. Productivity dropped like a rock. But during COVID we have had glimpses of that crunch style that come when everyone is on a call. Completing a story is so much better when the team completes that story.


Where do you work, so that I can make sure to never work there?


I think a developer doing their own QA is far better than them not doing it. How are they not just throwing garbage over the wall if it hasn't been exercised meaningfully? (Typically an automated test won't be as good as a manual test, but maybe it can be.) In the bigger picture, going "slower" is probably better than worrying about "speed of delivery" and then taking longer to actually be done with the level of quality that would make the product "good".


The article says the engineer should go out and find another person to use it in front of them!

A quick local build and cursory click through can be okay but should they be doing the above for every contribution?


Great point for a deeper discussion. When it comes to deciding when to move fast and when to think more carefully, the answer feels far more nuanced and problem-domain-specific than the aphorism suggests.

More action has led to painfully bad engineering decisions at some working startups I've joined, leading to attrition and perhaps years of refactoring. Sure, the argument could be made, "well, that's the learning!" - you moved fast and realized it wouldn't scale. In some cases that's right, and in other cases you wish you had hired someone better for the job.

Perhaps, then, to minimize future pain, the less-talk-more-action mentality only works with better engineers who make better decisions. I don't have any data to back up this claim, but it sure feels like the engineers who succeed at moving fast are the ones who make fewer mistakes along the way.


“Understand why you’re building it. If you’re building it, there should be value to your users and the business. What is that value? Do you agree with it?”

Is that really your concern? If you are following the spec, it should be irrelevant - sometimes as an engineer you are not privy to how things will integrate until later, unless you have a real concern, the “value add” is irrelevant.

I dislike these “non article articles” for exactly this reason, it just is not reflected in reality

Edit: The whole article just annoyed me, I don’t know why, so maybe I’m just snapping at things


As a former developer, who now mostly writes those specs... Yes. Yes, I want my team to understand why. They do better when they understand. They have insights into what choices to make when writing the code. We have better conversations about the product and the work to enhance it. And they correct my errors. I'm not perfect just because I write specs, and I appreciate most of the suggestions and corrections that the devs bring to me.


I can't count the number of times I've done these exact steps:

- see a feature request that sounds weird.

- ask what problem they are trying to solve

- come up with a less weird / more useful solution.

As engineers we have information and skills that designers and pms often lack. You can just build what you're asked to build and you'll be fine, but understanding the value that's added will make sure you build it better.


> Is that really your concern? If you are following the spec, it should be irrelevant - sometimes as an engineer you are not privy to how things will integrate until later, unless you have a real concern, the “value add” is irrelevant.

I have the opposite view, I think it matters a lot. As an engineer, I understand things better when I can see the "why". Being aware of the "why" means that I will more easily follow the spirit of the spec instead of just the letter. It means I will be more aligned with the users, the business. All of these avoid the classic trap of having a dev team disconnected from the business, which usually leads to a bad atmosphere and late/failed projects.


> follow more easily the spirit of the spec instead of just the word.

It also allows you to use your experience and expertise to tell the writer of the spec that they have written a pile of junk and that what they are asking for is not what they need. Sometimes when I have done that I have also discovered that it was not what they wanted either but simply what they thought was practicable. Together we were able to thrash out a better spec that achieved the ends that were necessary.

I wish that people would remember that the vast majority of people writing code are doing it inside the same company or even department that will be using the final program. It is often the case that at least some of the software developers in such teams have as much or more expertise in the domain as the customers commissioning the software; it would be a dereliction of duty to fail to question a specification that seemed pointless, inconsistent, or inadequate to the problem it was purported to solve.


I agree. I noticed the same as you, when I talk to people; sometimes they don't know what's easy, what's hard or even what's possible to do with code. Talking to them and trying to understand what they're trying to do often leads to a better result.


> Is that really your concern?

That surely depends upon both the individual’s role and the culture in the organisation, but I tend to agree with the original author. As a developer, I can often provide better results if I’m aware of the context and not operating in isolation. As a manager, I get better results from the people working for me if we’re communicating openly and often enough to address any questions or problems quickly, and this applies to software developers as much as any other field.

Of course this matters more as you get more senior as a developer or perhaps start to take on hybrid roles like tech lead or software architect. However, in my experience the principle works very widely, except maybe for juniors who have enough to learn already and new starters who are still finding their way around.

No doubt there are some projects where for legitimate security reasons there is a culture of need-to-know and compartmentalisation of information, but I suspect that across the industry as a whole, variations of wilful ignorance among developers do far more harm than good. It happens at large scale, like pigeonholing developers into narrow roles and trying to make everything possible into someone else’s problem. It happens at smaller scales, like being too dogmatic about pretending we have no idea what we’re going to be working on a few days or even a few hours later and stubbornly refusing to make any allowance for it in code we’re currently writing.

That kind of culture can become toxic and unproductive, but it can often be fixed by sharing information more freely with those who might find it useful and then having those people take the extra context into account when making their own plans and decisions. A lot of the points in the article here could be specific examples of that general idea, and IMHO those are all sensible advice.


Possibly not, but the article annoyed me.

I think I agree with a lot of your post, but I do still disagree that everybody needs to be aware of the end goal.


A lack of awareness of the purpose of work can often cause the work to be done in a way which harms the actual objective. This is because the internal group doing the work (without any insight into why or to what end) will have its own objectives and incentives that are distinct from or in conflict with the overall objective. In order for an internal group like this to function blindly, their incentive structure needs to be properly established by some other group with proper insight. So even if the people doing the work are blind to the purpose, the ones establishing their measures and controls cannot be blind.

This can work, and often seems to work well enough, but it also can fail spectacularly. See the recent Ask HN discussion on Google's 50 billion messaging clients.


Understanding why you're doing something at the strategic level, helps you make decisions at the tactical level without having to go back up the chain every time. Without this knowledge, someone else has to make decisions that you are best placed to make.


The how and the why are what separates the garden variety codemonkey from the staff/principal levels.


Yep, if you can do the how, then you'll learn the why


Hum... Are you talking from the point of view of a developer working in a feature fabric?

If so, ok, your work will become much better if you ask those questions, but the party paying you does not care about it at all. You are just expected to deliver this feature.

But if you have any amount of goal alignment with the people that will get the software (as little as wanting them to perceive your work is useful), you will get much better results by having a picture of what your software is supposed to do. If you do not have access to this information, it's relevant to be aware that you are not receiving all the tools you need to succeed.


Sorry, I don’t know what feature fabric means?


Probably "feature factory". A group that exists just to churn out what they're told to do, versus a group that works with their customers to build what the customers actually want/need.


Yes, feature factory. Thanks. That was my native language pushing itself over English for a moment.


I particularly don’t believe that it is possible to have a spec which does a good enough job of explaining what is needed that the implementor doesn’t need to know why. I’ve certainly never encountered such a thing, and all of the best specs I’ve worked with explain the why as well. It’s like claiming that your code doesn’t need any comments saying why things are done because you can see what is happening.


I had a thought about this regarding not seeing users using what I built. I generally know what it's for (medical) but I just make it, it gets released, cool onto the next one. Only time I hear about it is if something is broken.

On the other side I like seeing metrics/feedback of something I made get used. Always sad when a venture dies/nothing happens to it.


Many shops are optimized to make the kind of thinking and organization described in the article impossible. They keep throwing developers into fire after fire, leaving no time for any reflection and growth. I was surprised to see several representatives of this camp pop up in the comments here.

If as an engineer these people give you the creeps, remember, you don't have to work for them. If you do currently work for someone like that, you can get out and keep your sanity. The best places to work are not sweatshops.


UK defence has the concept of 'lines of development'. When creating a new capability, you need to think through

* the training needed to use it

* the equipment it will require

* the personnel needed to operate it

* the policy associated with its use

* the information it will require

* where it will sit in an organisation

* the infrastructure required

* how it will be supported through-life

And some others. These are a great checklist of things that if not considered can cause a system to fail / be unused.

Also... Before enlightenment, fetch water, chop wood. After enlightenment, fetch water, chop wood.


Focus on the problem at hand and don't chase technical debt unless it impedes you. If it doesn't, open an issue, leave a TODO, etc. Nothing bothers me more than finding a bunch of changes in a pull request completely unrelated to the scope of the issue, and they're most often a time sink for the developer chasing them.


Some jobs don't allow for time to really address tech debt. Lots of us get in the habit of sneaking improvements in as we go because otherwise they never get done.


Yes. If I am doing refactoring work during a sprint, I don't sneak features the business is pushing for into the pull request. I am both held responsible for code being kept maintainable and almost never given any time to keep code maintainable, so I refactor as I go.


Good code is produced when people can easily improve it. In my experience, people leave too much crap untouched because of unnecessary barriers, and having to create a new issue is definitely one of them.

I'd rather review 10 extra files every PR if it means the code is improving. If you keep doing this, eventually there won't be so many extra changes because they're fixed already.

I'm curious, why do you think fixing tech debt is a time sink?


I see issues like contracts--they have a scope and you are agreeing to work on that scope. If you go beyond the scope, you're breaking the contract. Someone might already be fixing the tech debt via a different issue, you may need to submit a new RFC, etc. Management and PMs will also start to notice and not appreciate the risk that your fixes might cause to the release.


I think the point is that these tech debt changes often aren't really making the code better. Also, when the PR has other things going on, it's harder to give a proper CR on the actual reason the code was changed. Sure, sometimes a person is able to remove a bunch of unnecessarily complex code or what not as part of the change, which I think is generally good.


I'd say there's one extra preliminary step missing:

Before (0): Understand if it needs to be built at all


Also: "Has this been invented before?" // "How shall mine be better than the dozen already created?".


but how will I make principal if we dont build our own in-house NoSQL database? :)


I was expecting to learn which purification rituals to do before and after interacting with the box of demons. Disappointed


There are some huge missing "before" items: document what you're about to do (possibly also in PlantUML so it can easily evolve together with the code), ask for efficient pair reviews (of your strategy, your implementation, your observability plan, etc.)...


Have a drink.

Have a smoke.



